Jan 17 12:17:49.093663 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:17:49.093714 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:49.093736 kernel: BIOS-provided physical RAM map:
Jan 17 12:17:49.093748 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:17:49.093759 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:17:49.093770 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:17:49.093786 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 12:17:49.093799 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 12:17:49.093811 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:17:49.093826 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:17:49.093839 kernel: NX (Execute Disable) protection: active
Jan 17 12:17:49.093852 kernel: APIC: Static calls initialized
Jan 17 12:17:49.093873 kernel: SMBIOS 2.8 present.
Jan 17 12:17:49.093887 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 12:17:49.093901 kernel: Hypervisor detected: KVM
Jan 17 12:17:49.093917 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:17:49.093937 kernel: kvm-clock: using sched offset of 3756684074 cycles
Jan 17 12:17:49.093953 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:17:49.093967 kernel: tsc: Detected 1995.312 MHz processor
Jan 17 12:17:49.093983 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:17:49.093996 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:17:49.094011 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 12:17:49.094026 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:17:49.094038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:17:49.094056 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:17:49.094070 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 12:17:49.094085 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094100 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094115 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094129 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 12:17:49.094143 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094158 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094172 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094189 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:49.094203 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 17 12:17:49.094217 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 17 12:17:49.094231 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 12:17:49.094260 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 17 12:17:49.094273 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 17 12:17:49.094289 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 17 12:17:49.094314 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 17 12:17:49.094326 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:17:49.094340 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:17:49.094373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:17:49.094388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:17:49.094408 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 12:17:49.094421 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 12:17:49.094440 kernel: Zone ranges:
Jan 17 12:17:49.094455 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:17:49.094467 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 12:17:49.094479 kernel: Normal empty
Jan 17 12:17:49.094492 kernel: Movable zone start for each node
Jan 17 12:17:49.094507 kernel: Early memory node ranges
Jan 17 12:17:49.094523 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:17:49.094536 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 12:17:49.094550 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 12:17:49.094569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:17:49.094582 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:17:49.094602 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 12:17:49.094617 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:17:49.094628 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:17:49.094640 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:17:49.094652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:17:49.094667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:17:49.094683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:17:49.094700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:17:49.094715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:17:49.094731 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:17:49.094744 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:17:49.094758 kernel: TSC deadline timer available
Jan 17 12:17:49.094773 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:17:49.094786 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:17:49.094797 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 12:17:49.094815 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:17:49.094831 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:17:49.094852 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:17:49.094865 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:17:49.094880 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:17:49.094894 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:17:49.094906 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:17:49.094919 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:49.094933 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:17:49.094952 kernel: random: crng init done
Jan 17 12:17:49.094967 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:17:49.094980 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:17:49.094995 kernel: Fallback order for Node 0: 0
Jan 17 12:17:49.095010 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 12:17:49.095034 kernel: Policy zone: DMA32
Jan 17 12:17:49.095050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:17:49.095072 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved)
Jan 17 12:17:49.095085 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:17:49.095104 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:17:49.095119 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:17:49.095133 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:17:49.095147 kernel: Dynamic Preempt: voluntary
Jan 17 12:17:49.095163 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:17:49.095180 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:17:49.095191 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:17:49.095204 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:17:49.095220 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:17:49.095239 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:17:49.095250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:17:49.095262 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:17:49.095275 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:17:49.095291 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:17:49.095311 kernel: Console: colour VGA+ 80x25
Jan 17 12:17:49.095325 kernel: printk: console [tty0] enabled
Jan 17 12:17:49.095340 kernel: printk: console [ttyS0] enabled
Jan 17 12:17:49.095367 kernel: ACPI: Core revision 20230628
Jan 17 12:17:49.095381 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:17:49.095401 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:17:49.095415 kernel: x2apic enabled
Jan 17 12:17:49.095431 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:17:49.095445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:17:49.095457 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 12:17:49.095469 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Jan 17 12:17:49.095483 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:17:49.095498 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:17:49.095528 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:17:49.095545 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:17:49.095561 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:17:49.095579 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:17:49.095595 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 12:17:49.095609 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:17:49.095621 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:17:49.095635 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:17:49.095651 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:17:49.095676 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:17:49.095691 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:17:49.095707 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:17:49.095723 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:17:49.095737 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:17:49.095751 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:17:49.095762 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:17:49.095775 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:17:49.095795 kernel: landlock: Up and running.
Jan 17 12:17:49.095807 kernel: SELinux: Initializing.
Jan 17 12:17:49.095820 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:49.095836 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:49.095852 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 12:17:49.095868 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:49.095885 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:49.095901 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:49.095916 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 12:17:49.095931 kernel: signal: max sigframe size: 1776
Jan 17 12:17:49.095944 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:17:49.095955 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:17:49.095967 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:17:49.095978 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:17:49.095990 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:17:49.096005 kernel: .... node #0, CPUs: #1
Jan 17 12:17:49.096019 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:17:49.096039 kernel: smpboot: Max logical packages: 1
Jan 17 12:17:49.096056 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Jan 17 12:17:49.096068 kernel: devtmpfs: initialized
Jan 17 12:17:49.096081 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:17:49.096097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:17:49.096113 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:17:49.096128 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:17:49.096145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:17:49.096161 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:17:49.096176 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:17:49.096197 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:17:49.096212 kernel: audit: type=2000 audit(1737116267.500:1): state=initialized audit_enabled=0 res=1
Jan 17 12:17:49.096226 kernel: cpuidle: using governor menu
Jan 17 12:17:49.096242 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:17:49.096259 kernel: dca service started, version 1.12.1
Jan 17 12:17:49.096273 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:17:49.096287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:17:49.096302 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:17:49.096318 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:17:49.096337 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:17:49.096391 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:17:49.096407 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:17:49.096419 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:17:49.096434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:17:49.096447 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:17:49.096461 kernel: ACPI: Interpreter enabled
Jan 17 12:17:49.096478 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:17:49.096495 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:17:49.096517 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:17:49.096534 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:17:49.096549 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:17:49.096565 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:17:49.096981 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:17:49.097232 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:17:49.097479 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:17:49.097514 kernel: acpiphp: Slot [3] registered
Jan 17 12:17:49.097527 kernel: acpiphp: Slot [4] registered
Jan 17 12:17:49.097541 kernel: acpiphp: Slot [5] registered
Jan 17 12:17:49.097557 kernel: acpiphp: Slot [6] registered
Jan 17 12:17:49.097572 kernel: acpiphp: Slot [7] registered
Jan 17 12:17:49.097588 kernel: acpiphp: Slot [8] registered
Jan 17 12:17:49.097603 kernel: acpiphp: Slot [9] registered
Jan 17 12:17:49.097617 kernel: acpiphp: Slot [10] registered
Jan 17 12:17:49.097631 kernel: acpiphp: Slot [11] registered
Jan 17 12:17:49.097669 kernel: acpiphp: Slot [12] registered
Jan 17 12:17:49.097680 kernel: acpiphp: Slot [13] registered
Jan 17 12:17:49.097688 kernel: acpiphp: Slot [14] registered
Jan 17 12:17:49.097696 kernel: acpiphp: Slot [15] registered
Jan 17 12:17:49.097705 kernel: acpiphp: Slot [16] registered
Jan 17 12:17:49.097713 kernel: acpiphp: Slot [17] registered
Jan 17 12:17:49.097722 kernel: acpiphp: Slot [18] registered
Jan 17 12:17:49.097730 kernel: acpiphp: Slot [19] registered
Jan 17 12:17:49.097738 kernel: acpiphp: Slot [20] registered
Jan 17 12:17:49.097747 kernel: acpiphp: Slot [21] registered
Jan 17 12:17:49.097758 kernel: acpiphp: Slot [22] registered
Jan 17 12:17:49.097766 kernel: acpiphp: Slot [23] registered
Jan 17 12:17:49.097775 kernel: acpiphp: Slot [24] registered
Jan 17 12:17:49.097783 kernel: acpiphp: Slot [25] registered
Jan 17 12:17:49.097791 kernel: acpiphp: Slot [26] registered
Jan 17 12:17:49.097800 kernel: acpiphp: Slot [27] registered
Jan 17 12:17:49.097808 kernel: acpiphp: Slot [28] registered
Jan 17 12:17:49.097816 kernel: acpiphp: Slot [29] registered
Jan 17 12:17:49.097824 kernel: acpiphp: Slot [30] registered
Jan 17 12:17:49.097836 kernel: acpiphp: Slot [31] registered
Jan 17 12:17:49.097845 kernel: PCI host bridge to bus 0000:00
Jan 17 12:17:49.098069 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:17:49.098196 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:17:49.098287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:49.099510 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:49.099671 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:49.099809 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:17:49.100023 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:17:49.100188 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:17:49.100371 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:17:49.100488 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 12:17:49.100590 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:17:49.100716 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:17:49.101699 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:17:49.101860 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:17:49.102003 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 12:17:49.102121 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 12:17:49.102263 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:17:49.103780 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:17:49.104006 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:17:49.104222 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:17:49.104574 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:17:49.104720 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 12:17:49.104831 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 12:17:49.104958 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:17:49.105073 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:17:49.105213 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:49.105314 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 12:17:49.105484 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 12:17:49.105625 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 12:17:49.105775 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:49.105875 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 12:17:49.106017 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 12:17:49.106133 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 12:17:49.106259 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 12:17:49.109725 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 12:17:49.109933 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 12:17:49.110093 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 12:17:49.110317 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:49.110585 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:17:49.110768 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 12:17:49.110934 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 12:17:49.111173 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:49.113608 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 12:17:49.113851 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 12:17:49.113996 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 12:17:49.114197 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:17:49.115501 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 12:17:49.115725 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 12:17:49.115763 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:17:49.115781 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:17:49.115812 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:17:49.115828 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:17:49.115854 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:17:49.115870 kernel: iommu: Default domain type: Translated
Jan 17 12:17:49.115886 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:17:49.115902 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:17:49.115918 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:17:49.115933 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:17:49.115949 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 12:17:49.116129 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:17:49.116280 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:17:49.117099 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:17:49.117131 kernel: vgaarb: loaded
Jan 17 12:17:49.117148 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:17:49.117163 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:17:49.117179 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:17:49.117195 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:17:49.117212 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:17:49.117227 kernel: pnp: PnP ACPI init
Jan 17 12:17:49.117243 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 12:17:49.117268 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:17:49.117284 kernel: NET: Registered PF_INET protocol family
Jan 17 12:17:49.117299 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:17:49.117336 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:17:49.117581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:17:49.117601 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:17:49.117619 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:17:49.117635 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:17:49.117651 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:49.117675 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:49.117691 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:17:49.117707 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:17:49.117895 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:17:49.118037 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:17:49.118193 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:49.118334 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:49.118484 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:49.118657 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:17:49.118817 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:17:49.118840 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:17:49.119033 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 54414 usecs
Jan 17 12:17:49.119056 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:17:49.119072 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:17:49.119088 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 12:17:49.119105 kernel: Initialise system trusted keyrings
Jan 17 12:17:49.119127 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:17:49.119140 kernel: Key type asymmetric registered
Jan 17 12:17:49.119153 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:17:49.119167 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:17:49.119183 kernel: io scheduler mq-deadline registered
Jan 17 12:17:49.119198 kernel: io scheduler kyber registered
Jan 17 12:17:49.119214 kernel: io scheduler bfq registered
Jan 17 12:17:49.119230 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:17:49.119246 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:17:49.119265 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:17:49.119282 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:17:49.119297 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:17:49.119313 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:17:49.119328 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:17:49.119344 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:17:49.121455 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:17:49.121831 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:17:49.121859 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:17:49.122014 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:17:49.122174 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:17:48 UTC (1737116268)
Jan 17 12:17:49.122308 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:17:49.122328 kernel: intel_pstate: CPU model not supported
Jan 17 12:17:49.122343 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:17:49.122383 kernel: Segment Routing with IPv6
Jan 17 12:17:49.122400 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:17:49.122417 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:17:49.122440 kernel: Key type dns_resolver registered
Jan 17 12:17:49.122456 kernel: IPI shorthand broadcast: enabled
Jan 17 12:17:49.122472 kernel: sched_clock: Marking stable (1382005886, 164535504)->(1605555092, -59013702)
Jan 17 12:17:49.122488 kernel: registered taskstats version 1
Jan 17 12:17:49.122504 kernel: Loading compiled-in X.509 certificates
Jan 17 12:17:49.122520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:17:49.122535 kernel: Key type .fscrypt registered
Jan 17 12:17:49.122551 kernel: Key type fscrypt-provisioning registered
Jan 17 12:17:49.122580 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:17:49.122602 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:17:49.122617 kernel: ima: No architecture policies found
Jan 17 12:17:49.122633 kernel: clk: Disabling unused clocks
Jan 17 12:17:49.122660 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:17:49.122675 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:17:49.122718 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:17:49.122734 kernel: Run /init as init process
Jan 17 12:17:49.122747 kernel: with arguments:
Jan 17 12:17:49.122761 kernel: /init
Jan 17 12:17:49.122779 kernel: with environment:
Jan 17 12:17:49.122792 kernel: HOME=/
Jan 17 12:17:49.122803 kernel: TERM=linux
Jan 17 12:17:49.122812 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:17:49.122825 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:17:49.122838 systemd[1]: Detected virtualization kvm.
Jan 17 12:17:49.122848 systemd[1]: Detected architecture x86-64.
Jan 17 12:17:49.122857 systemd[1]: Running in initrd.
Jan 17 12:17:49.122870 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:17:49.122878 systemd[1]: Hostname set to .
Jan 17 12:17:49.122888 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:17:49.122897 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:17:49.122910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:49.122925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:49.122941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:17:49.122955 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:17:49.122974 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:17:49.122985 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:17:49.122998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:17:49.123007 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:17:49.123016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:49.123025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:49.123037 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:49.123047 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:17:49.123056 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:17:49.123068 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:49.123077 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:17:49.123086 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:17:49.123099 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:17:49.123108 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:17:49.123117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:49.123126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:49.123135 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:49.123144 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:49.123153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:17:49.123162 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:17:49.123175 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:17:49.123184 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:17:49.123193 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:17:49.123203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:17:49.123212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:49.123221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:17:49.123277 systemd-journald[183]: Collecting audit messages is disabled.
Jan 17 12:17:49.123306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:49.123315 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:17:49.123327 systemd-journald[183]: Journal started
Jan 17 12:17:49.125419 systemd-journald[183]: Runtime Journal (/run/log/journal/485e830856174d708458138b21eb237f) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:17:49.130451 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:17:49.101437 systemd-modules-load[184]: Inserted module 'overlay'
Jan 17 12:17:49.186751 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:17:49.186831 kernel: Bridge firewalling registered
Jan 17 12:17:49.186847 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:17:49.160225 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 17 12:17:49.187820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:49.189022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:49.196175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:17:49.206789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:49.218165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:49.224685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:17:49.226530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:49.249563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:49.263675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:49.265904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:49.274769 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:17:49.276668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:49.283625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:49.300842 dracut-cmdline[215]: dracut-dracut-053
Jan 17 12:17:49.307550 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:49.342685 systemd-resolved[218]: Positive Trust Anchors:
Jan 17 12:17:49.342716 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:49.342764 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:49.351246 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 17 12:17:49.353373 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:49.354974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:49.432455 kernel: SCSI subsystem initialized
Jan 17 12:17:49.446424 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:17:49.460404 kernel: iscsi: registered transport (tcp)
Jan 17 12:17:49.496800 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:17:49.496919 kernel: QLogic iSCSI HBA Driver
Jan 17 12:17:49.563078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:17:49.571731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:17:49.621618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:17:49.621729 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:17:49.622410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:17:49.681557 kernel: raid6: avx2x4 gen() 25171 MB/s
Jan 17 12:17:49.696415 kernel: raid6: avx2x2 gen() 24414 MB/s
Jan 17 12:17:49.713826 kernel: raid6: avx2x1 gen() 15689 MB/s
Jan 17 12:17:49.713926 kernel: raid6: using algorithm avx2x4 gen() 25171 MB/s
Jan 17 12:17:49.733405 kernel: raid6: .... xor() 7738 MB/s, rmw enabled
Jan 17 12:17:49.733584 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:17:49.759409 kernel: xor: automatically using best checksumming function avx
Jan 17 12:17:49.947420 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:17:49.964533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:17:49.971635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:49.996307 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 17 12:17:50.002333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:50.013245 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:17:50.031495 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 17 12:17:50.073707 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:17:50.081823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:17:50.140500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:50.150553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:17:50.174471 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:17:50.177828 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:17:50.179538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:50.182676 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:17:50.189740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:17:50.222471 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:17:50.257388 kernel: libata version 3.00 loaded.
Jan 17 12:17:50.259620 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:17:50.293099 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:17:50.293126 kernel: scsi host0: ata_piix
Jan 17 12:17:50.293336 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 12:17:50.319164 kernel: scsi host1: ata_piix
Jan 17 12:17:50.319372 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 12:17:50.319396 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 12:17:50.319408 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 12:17:50.319522 kernel: scsi host2: Virtio SCSI HBA
Jan 17 12:17:50.319634 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:17:50.319645 kernel: GPT:9289727 != 125829119
Jan 17 12:17:50.319655 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:17:50.319665 kernel: GPT:9289727 != 125829119
Jan 17 12:17:50.319675 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:17:50.319688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:50.319699 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:17:50.319710 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:17:50.321712 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 12:17:50.338604 kernel: ACPI: bus type USB registered
Jan 17 12:17:50.338637 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 17 12:17:50.338829 kernel: usbcore: registered new interface driver usbfs
Jan 17 12:17:50.330395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:17:50.346283 kernel: usbcore: registered new interface driver hub
Jan 17 12:17:50.346312 kernel: usbcore: registered new device driver usb
Jan 17 12:17:50.330563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:50.335337 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:50.336127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:50.336408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:50.337167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:50.345946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:50.432419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:50.443578 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:50.503645 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457)
Jan 17 12:17:50.513893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:17:50.516090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:50.527602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:17:50.534163 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
Jan 17 12:17:50.552830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:17:50.560817 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:17:50.571256 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 12:17:50.571603 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 12:17:50.571821 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 12:17:50.571992 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 12:17:50.572134 kernel: hub 1-0:1.0: USB hub found
Jan 17 12:17:50.572340 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 12:17:50.569336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:17:50.577747 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:17:50.590417 disk-uuid[550]: Primary Header is updated.
Jan 17 12:17:50.590417 disk-uuid[550]: Secondary Entries is updated.
Jan 17 12:17:50.590417 disk-uuid[550]: Secondary Header is updated.
Jan 17 12:17:50.595281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:50.601165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:50.607410 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:51.610660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:51.610738 disk-uuid[551]: The operation has completed successfully.
Jan 17 12:17:51.665730 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:17:51.665877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:17:51.682716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:17:51.690478 sh[564]: Success
Jan 17 12:17:51.712412 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:17:51.822290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:17:51.836709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:17:51.844530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:17:51.865617 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:17:51.871657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:51.871769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:17:51.871798 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:17:51.871820 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:17:51.884847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:17:51.886088 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:17:51.892770 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:17:51.902682 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:17:51.911610 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:51.911718 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:51.911740 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:51.919393 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:51.933912 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:17:51.936797 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:51.945024 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:17:51.954650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:17:52.139818 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:17:52.144958 ignition[644]: Ignition 2.19.0
Jan 17 12:17:52.144967 ignition[644]: Stage: fetch-offline
Jan 17 12:17:52.145024 ignition[644]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:52.145035 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:52.145166 ignition[644]: parsed url from cmdline: ""
Jan 17 12:17:52.145172 ignition[644]: no config URL provided
Jan 17 12:17:52.145180 ignition[644]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:52.145191 ignition[644]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:52.145199 ignition[644]: failed to fetch config: resource requires networking
Jan 17 12:17:52.145970 ignition[644]: Ignition finished successfully
Jan 17 12:17:52.156958 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:52.159102 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:17:52.191076 systemd-networkd[755]: lo: Link UP
Jan 17 12:17:52.191094 systemd-networkd[755]: lo: Gained carrier
Jan 17 12:17:52.194596 systemd-networkd[755]: Enumeration completed
Jan 17 12:17:52.195099 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:52.195213 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:52.195219 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 12:17:52.196448 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:52.197377 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:52.197383 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:17:52.198706 systemd-networkd[755]: eth0: Link UP
Jan 17 12:17:52.198713 systemd-networkd[755]: eth0: Gained carrier
Jan 17 12:17:52.198725 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:52.206854 systemd-networkd[755]: eth1: Link UP
Jan 17 12:17:52.206861 systemd-networkd[755]: eth1: Gained carrier
Jan 17 12:17:52.206879 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:52.209703 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:17:52.222506 systemd-networkd[755]: eth0: DHCPv4 address 209.38.133.237/19, gateway 209.38.128.1 acquired from 169.254.169.253
Jan 17 12:17:52.226494 systemd-networkd[755]: eth1: DHCPv4 address 10.124.0.10/20 acquired from 169.254.169.253
Jan 17 12:17:52.233939 ignition[758]: Ignition 2.19.0
Jan 17 12:17:52.233962 ignition[758]: Stage: fetch
Jan 17 12:17:52.234324 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:52.234344 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:52.234568 ignition[758]: parsed url from cmdline: ""
Jan 17 12:17:52.234575 ignition[758]: no config URL provided
Jan 17 12:17:52.234591 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:52.234609 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:52.234641 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 12:17:52.252456 ignition[758]: GET result: OK
Jan 17 12:17:52.252743 ignition[758]: parsing config with SHA512: 0985c16e02450c7c3e1f95bf468ea6847bc9b29917bcdd3cbf5f0e62fdec834212f3b47ceab2e6fa94863f2852e9b39ecbdc0abb32a6b1c6016596ffb6005b57
Jan 17 12:17:52.259990 unknown[758]: fetched base config from "system"
Jan 17 12:17:52.260009 unknown[758]: fetched base config from "system"
Jan 17 12:17:52.260846 ignition[758]: fetch: fetch complete
Jan 17 12:17:52.260031 unknown[758]: fetched user config from "digitalocean"
Jan 17 12:17:52.260855 ignition[758]: fetch: fetch passed
Jan 17 12:17:52.260928 ignition[758]: Ignition finished successfully
Jan 17 12:17:52.264709 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:17:52.273737 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:17:52.316997 ignition[765]: Ignition 2.19.0
Jan 17 12:17:52.317651 ignition[765]: Stage: kargs
Jan 17 12:17:52.317980 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:52.317994 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:52.320913 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:17:52.319426 ignition[765]: kargs: kargs passed
Jan 17 12:17:52.319500 ignition[765]: Ignition finished successfully
Jan 17 12:17:52.327629 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:17:52.355019 ignition[771]: Ignition 2.19.0
Jan 17 12:17:52.355038 ignition[771]: Stage: disks
Jan 17 12:17:52.355343 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:52.355387 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:52.358711 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:17:52.356638 ignition[771]: disks: disks passed
Jan 17 12:17:52.364180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:17:52.356700 ignition[771]: Ignition finished successfully
Jan 17 12:17:52.365626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:17:52.366788 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:17:52.368337 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:52.370003 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:52.375603 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:17:52.412568 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:17:52.416627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:17:52.426205 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:17:52.599393 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:17:52.600092 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:17:52.601319 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:17:52.615625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:17:52.619561 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:17:52.627111 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 12:17:52.631482 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
Jan 17 12:17:52.638703 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:52.638793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:52.640500 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:52.645974 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:52.643764 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:17:52.648520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:17:52.648570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:17:52.653127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:17:52.655224 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:17:52.670728 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:17:52.749400 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:17:52.764382 coreos-metadata[790]: Jan 17 12:17:52.762 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:17:52.771643 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:17:52.780121 coreos-metadata[790]: Jan 17 12:17:52.776 INFO Fetch successful Jan 17 12:17:52.780884 coreos-metadata[789]: Jan 17 12:17:52.780 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:17:52.785816 coreos-metadata[790]: Jan 17 12:17:52.785 INFO wrote hostname ci-4081.3.0-8-018bcc3779 to /sysroot/etc/hostname Jan 17 12:17:52.787055 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:17:52.788852 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:17:52.795579 coreos-metadata[789]: Jan 17 12:17:52.795 INFO Fetch successful Jan 17 12:17:52.797058 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:17:52.804888 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 12:17:52.805058 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 12:17:52.946008 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:17:52.954598 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:17:52.965232 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:17:52.977281 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:17:52.979132 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:53.005064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:17:53.021042 ignition[908]: INFO : Ignition 2.19.0 Jan 17 12:17:53.021042 ignition[908]: INFO : Stage: mount Jan 17 12:17:53.022690 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:53.022690 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:17:53.024230 ignition[908]: INFO : mount: mount passed Jan 17 12:17:53.024230 ignition[908]: INFO : Ignition finished successfully Jan 17 12:17:53.024599 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:17:53.032576 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:17:53.049818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:17:53.062109 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919) Jan 17 12:17:53.062213 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:53.065324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:17:53.065662 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:17:53.072424 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:17:53.074468 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:17:53.107373 ignition[936]: INFO : Ignition 2.19.0 Jan 17 12:17:53.107373 ignition[936]: INFO : Stage: files Jan 17 12:17:53.109098 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:53.109098 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:17:53.109098 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:17:53.111850 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:17:53.111850 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:17:53.113840 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:17:53.113840 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:17:53.115591 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:17:53.114066 unknown[936]: wrote ssh authorized keys file for user: core Jan 17 12:17:53.117804 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:17:53.117804 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:17:53.162407 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:17:53.258849 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:17:53.258849 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:17:53.258849 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 12:17:53.754946 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:17:53.865192 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:17:53.866680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:17:53.866680 
ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:17:53.882991 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:17:53.882991 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:17:53.882991 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:17:53.882991 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:17:53.882991 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:17:54.140641 systemd-networkd[755]: eth0: Gained IPv6LL Jan 17 12:17:54.141118 systemd-networkd[755]: eth1: Gained IPv6LL Jan 17 12:17:54.278671 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:17:54.590444 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:17:54.590444 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:17:54.594629 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:17:54.594629 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:17:54.594629 ignition[936]: INFO : files: files passed Jan 17 12:17:54.594629 ignition[936]: INFO : Ignition finished successfully Jan 17 12:17:54.598803 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:17:54.609851 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:17:54.614837 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:17:54.618098 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:17:54.618282 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 12:17:54.652299 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:54.652299 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:54.655410 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:54.659086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:17:54.659952 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:17:54.667622 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:17:54.702153 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:17:54.702283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:17:54.703666 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:17:54.704247 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:17:54.705807 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:17:54.711670 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:17:54.733048 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:17:54.738718 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:17:54.766124 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:17:54.767147 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:17:54.768650 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:17:54.770117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:17:54.770308 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:17:54.771968 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:17:54.772734 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:17:54.774099 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:17:54.775093 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:17:54.776463 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:17:54.778028 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:17:54.779310 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:17:54.780690 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:17:54.782491 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:17:54.784078 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:17:54.785545 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:17:54.785762 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:17:54.787318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:54.788183 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:17:54.789547 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:17:54.789680 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 12:17:54.791036 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:17:54.791219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:17:54.793194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:17:54.793971 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:17:54.795094 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:17:54.795268 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:17:54.796648 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:17:54.796781 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:17:54.808797 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:17:54.812710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:17:54.813472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:17:54.813701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:17:54.815772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:17:54.815957 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:17:54.827728 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:17:54.827880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:17:54.855463 ignition[988]: INFO : Ignition 2.19.0 Jan 17 12:17:54.855463 ignition[988]: INFO : Stage: umount Jan 17 12:17:54.855463 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:54.855463 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:17:54.869829 ignition[988]: INFO : umount: umount passed Jan 17 12:17:54.869829 ignition[988]: INFO : Ignition finished successfully Jan 17 12:17:54.864798 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:17:54.864968 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:17:54.868964 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:17:54.869101 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:17:54.871552 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:17:54.871646 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:17:54.875596 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:17:54.875689 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:17:54.880330 systemd[1]: Stopped target network.target - Network. Jan 17 12:17:54.881215 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:17:54.881390 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:17:54.883134 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:17:54.884544 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:17:54.888946 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:17:54.890237 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:17:54.892676 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:17:54.894743 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 17 12:17:54.894828 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:17:54.895981 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:17:54.896033 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:17:54.897472 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:17:54.897561 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:17:54.899084 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:17:54.899135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:17:54.900604 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:17:54.903424 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:17:54.904470 systemd-networkd[755]: eth1: DHCPv6 lease lost Jan 17 12:17:54.908308 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:17:54.912529 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:17:54.912713 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:17:54.913654 systemd-networkd[755]: eth0: DHCPv6 lease lost Jan 17 12:17:54.916277 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:17:54.916538 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:17:54.917640 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:17:54.917802 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:17:54.922142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:17:54.922253 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:17:54.923425 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:17:54.923512 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:17:54.931971 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:17:54.932637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:17:54.932736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:17:54.934098 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:17:54.934180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:17:54.938175 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:17:54.938260 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:17:54.939292 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:17:54.939377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:17:54.940863 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:17:54.956137 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:17:54.956324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:17:54.959421 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:17:54.959642 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:17:54.961387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:17:54.961624 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:17:54.963011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 12:17:54.963062 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:17:54.964719 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:17:54.964794 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:17:54.967117 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:17:54.967208 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:17:54.968627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:17:54.968716 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:17:54.975664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:17:54.977787 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:17:54.978566 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:17:54.981043 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:17:54.981133 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:17:54.981971 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:17:54.982034 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:17:54.982849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:17:54.982908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:54.986995 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:17:54.987155 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:17:54.988922 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:17:54.998642 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:17:55.008701 systemd[1]: Switching root. Jan 17 12:17:55.076019 systemd-journald[183]: Journal stopped Jan 17 12:17:56.963034 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:17:56.963139 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:17:56.963162 kernel: SELinux: policy capability open_perms=1 Jan 17 12:17:56.963193 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:17:56.963210 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:17:56.963227 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:17:56.963246 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:17:56.963262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:17:56.963281 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:17:56.963297 kernel: audit: type=1403 audit(1737116275.334:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:17:56.963317 systemd[1]: Successfully loaded SELinux policy in 54.583ms. Jan 17 12:17:56.963373 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.608ms. Jan 17 12:17:56.963401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:17:56.963419 systemd[1]: Detected virtualization kvm. 
Jan 17 12:17:56.963437 systemd[1]: Detected architecture x86-64. Jan 17 12:17:56.963456 systemd[1]: Detected first boot. Jan 17 12:17:56.963475 systemd[1]: Hostname set to ci-4081.3.0-8-018bcc3779. Jan 17 12:17:56.963499 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:17:56.963517 zram_generator::config[1031]: No configuration found. Jan 17 12:17:56.963545 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:17:56.963562 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:17:56.963579 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:17:56.963599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:17:56.963619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:17:56.963636 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:17:56.963653 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:17:56.963672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:17:56.963690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:17:56.963711 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:17:56.963729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:17:56.963748 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:17:56.963765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:17:56.963783 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:17:56.963800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:17:56.963818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:17:56.963837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:17:56.963859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:17:56.963878 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:17:56.963895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:17:56.963912 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:17:56.963932 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:17:56.963950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:17:56.963977 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:17:56.963993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:17:56.964012 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:17:56.964029 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:17:56.964046 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:17:56.964070 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:17:56.964089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:17:56.964106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:56.964125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:17:56.964143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:17:56.964165 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:17:56.964184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:17:56.964202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:17:56.964221 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:17:56.964238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:56.964256 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:17:56.964274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:17:56.964292 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:17:56.964311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:17:56.964332 systemd[1]: Reached target machines.target - Containers. Jan 17 12:17:56.964369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:17:56.964389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:56.964406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:17:56.964424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:17:56.964443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:56.964461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:17:56.964480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:56.964502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:17:56.964520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:56.964539 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:17:56.964556 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:17:56.964575 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:17:56.964592 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:17:56.964610 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:17:56.964628 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:17:56.964645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:17:56.964667 kernel: fuse: init (API version 7.39) Jan 17 12:17:56.964687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:17:56.964705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:17:56.964722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:17:56.964741 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:17:56.964759 systemd[1]: Stopped verity-setup.service. 
Jan 17 12:17:56.964777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:56.964796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:17:56.964819 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:17:56.964836 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:17:56.964856 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:17:56.964874 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:17:56.964892 kernel: loop: module loaded Jan 17 12:17:56.964913 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:17:56.964934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:17:56.964956 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:17:56.964974 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:17:56.964992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:56.965011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:56.965032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:56.965054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:56.965076 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:17:56.965098 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:17:56.965120 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:56.965140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:56.965162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:17:56.965182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:17:56.965208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:17:56.965229 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:17:56.965297 systemd-journald[1100]: Collecting audit messages is disabled. Jan 17 12:17:56.967403 systemd-journald[1100]: Journal started Jan 17 12:17:56.967494 systemd-journald[1100]: Runtime Journal (/run/log/journal/485e830856174d708458138b21eb237f) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:17:56.454159 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:17:56.477618 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:17:56.478266 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:17:56.987382 kernel: ACPI: bus type drm_connector registered Jan 17 12:17:57.012443 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:17:57.024531 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:17:57.031592 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:17:57.034399 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:17:57.042394 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 17 12:17:57.052398 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:17:57.063694 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:17:57.066389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:57.075483 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:17:57.079397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:17:57.090464 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:17:57.090612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:17:57.105734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:17:57.122583 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:17:57.127408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:17:57.135432 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:17:57.134970 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:17:57.137055 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:17:57.137278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:17:57.138205 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:17:57.139609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:17:57.141518 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:17:57.154088 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:17:57.194933 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:17:57.205879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:17:57.218735 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:17:57.217786 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:17:57.240039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:17:57.246959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:17:57.258766 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:17:57.270759 systemd-journald[1100]: Time spent on flushing to /var/log/journal/485e830856174d708458138b21eb237f is 89.744ms for 1000 entries. Jan 17 12:17:57.270759 systemd-journald[1100]: System Journal (/var/log/journal/485e830856174d708458138b21eb237f) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:17:57.402427 systemd-journald[1100]: Received client request to flush runtime journal. Jan 17 12:17:57.402551 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:17:57.402579 kernel: loop1: detected capacity change from 0 to 210664 Jan 17 12:17:57.363959 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 17 12:17:57.369080 systemd-tmpfiles[1135]: ACLs are not supported, ignoring. Jan 17 12:17:57.369094 systemd-tmpfiles[1135]: ACLs are not supported, ignoring. Jan 17 12:17:57.388743 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:17:57.389698 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:17:57.404929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:17:57.407713 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:17:57.421541 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:17:57.425801 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:17:57.447403 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:17:57.509587 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:17:57.522495 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:17:57.523698 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:17:57.569559 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:17:57.604413 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:17:57.613157 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 17 12:17:57.613194 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 17 12:17:57.630404 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:17:57.637021 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:17:57.664714 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 12:17:57.665738 (sd-merge)[1177]: Merged extensions into '/usr'. Jan 17 12:17:57.671784 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:17:57.671810 systemd[1]: Reloading... Jan 17 12:17:57.810403 zram_generator::config[1206]: No configuration found. Jan 17 12:17:58.165160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:58.207234 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:17:58.270859 systemd[1]: Reloading finished in 598 ms. Jan 17 12:17:58.317302 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:17:58.318837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:17:58.333803 systemd[1]: Starting ensure-sysext.service... Jan 17 12:17:58.338144 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:17:58.352795 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:17:58.352815 systemd[1]: Reloading... Jan 17 12:17:58.401889 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:17:58.402375 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:17:58.403983 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jan 17 12:17:58.404429 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 17 12:17:58.404608 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 17 12:17:58.411388 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:17:58.413618 systemd-tmpfiles[1250]: Skipping /boot Jan 17 12:17:58.442551 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:17:58.442568 systemd-tmpfiles[1250]: Skipping /boot Jan 17 12:17:58.535403 zram_generator::config[1276]: No configuration found. Jan 17 12:17:58.705082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:58.760827 systemd[1]: Reloading finished in 407 ms. Jan 17 12:17:58.779995 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:17:58.786134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:17:58.799677 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:58.804598 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:17:58.808257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:17:58.820713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:17:58.823174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:17:58.828593 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:17:58.838593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:58.838870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:58.851766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:58.860030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:58.872584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:58.874531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:58.875082 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:58.886811 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:17:58.889776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:58.890057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:58.890312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:58.890802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:17:58.895937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:58.896304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:58.907875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:17:58.908936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:58.909165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:58.911150 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:17:58.915616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:58.916241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:58.921009 systemd[1]: Finished ensure-sysext.service. Jan 17 12:17:58.947573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:17:58.959825 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:17:58.961581 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:17:58.963481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:17:58.973564 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:17:58.978316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:58.980821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:58.984981 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:58.985622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:59.001262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:17:59.004819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:17:59.004956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:17:59.005033 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:17:59.015509 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 17 12:17:59.022222 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:17:59.036505 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:17:59.042493 augenrules[1363]: No rules Jan 17 12:17:59.047532 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:59.085558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:17:59.100689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:17:59.148653 systemd-resolved[1326]: Positive Trust Anchors: Jan 17 12:17:59.148675 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:17:59.148722 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:17:59.156583 systemd-resolved[1326]: Using system hostname 'ci-4081.3.0-8-018bcc3779'. Jan 17 12:17:59.158932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:17:59.160401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:17:59.241332 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:17:59.242718 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:17:59.278528 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 12:17:59.280455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:59.280640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:59.291621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:59.296662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:59.304620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:59.305727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:59.305870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:17:59.305898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:59.310297 systemd-networkd[1371]: lo: Link UP Jan 17 12:17:59.310313 systemd-networkd[1371]: lo: Gained carrier Jan 17 12:17:59.320611 systemd-networkd[1371]: Enumeration completed Jan 17 12:17:59.323072 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:17:59.324537 systemd[1]: Reached target network.target - Network. Jan 17 12:17:59.325645 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-aa:86:c0:69:9f:80.network. Jan 17 12:17:59.333734 systemd-networkd[1371]: eth0: Link UP Jan 17 12:17:59.334214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:17:59.336858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:59.337117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:59.338029 systemd-networkd[1371]: eth0: Gained carrier Jan 17 12:17:59.339786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:17:59.345220 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:17:59.359604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:59.365799 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 12:17:59.359881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:59.367694 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 12:17:59.374510 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:17:59.386780 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:59.387051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:59.388861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:17:59.452442 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:17:59.509688 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:17:59.509725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Jan 17 12:17:59.509749 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:17:59.520603 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-4e:95:2e:7f:e0:1a.network. Jan 17 12:17:59.521924 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:17:59.523948 systemd-networkd[1371]: eth1: Link UP Jan 17 12:17:59.523962 systemd-networkd[1371]: eth1: Gained carrier Jan 17 12:17:59.530122 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:17:59.532310 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:17:59.559434 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:17:59.630547 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:17:59.639000 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:17:59.639086 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:17:59.651753 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:17:59.651867 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:17:59.651889 kernel: [drm] features: -context_init Jan 17 12:17:59.651908 kernel: [drm] number of scanouts: 1 Jan 17 12:17:59.655840 kernel: [drm] number of cap sets: 0 Jan 17 12:17:59.659399 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:17:59.661736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:59.673308 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:17:59.673434 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:17:59.676535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:17:59.676804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:59.683522 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:17:59.694421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:17:59.761344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 17 12:17:59.770914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:59.780856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:17:59.781140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:59.790682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:59.799021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:17:59.867090 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:17:59.900580 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:17:59.913919 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:17:59.937580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:59.941262 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:17:59.978110 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:17:59.978744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:59.978905 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:17:59.979234 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:17:59.979439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:17:59.979885 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:17:59.980216 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:17:59.980642 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:17:59.981629 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:17:59.981679 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:17:59.981775 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:17:59.986229 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:17:59.990018 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:18:00.016051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:18:00.030741 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:18:00.033554 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:18:00.037034 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:18:00.038001 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:00.039788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:00.039836 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:00.042401 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:18:00.047670 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:18:00.059443 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:18:00.069392 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 17 12:18:00.081634 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:18:00.098654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:18:00.102277 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:18:00.112623 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:18:00.131613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:18:00.139818 jq[1435]: false Jan 17 12:18:00.144451 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:18:00.151221 coreos-metadata[1433]: Jan 17 12:18:00.151 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:18:00.160651 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:18:00.162526 dbus-daemon[1434]: [system] SELinux support is enabled Jan 17 12:18:00.166286 coreos-metadata[1433]: Jan 17 12:18:00.164 INFO Fetch successful Jan 17 12:18:00.179471 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:18:00.183822 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:18:00.185566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:18:00.195604 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:18:00.201680 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:18:00.204004 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:18:00.215750 extend-filesystems[1436]: Found loop4 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found loop5 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found loop6 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found loop7 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda1 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda2 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda3 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found usr Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda4 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda6 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda7 Jan 17 12:18:00.224168 extend-filesystems[1436]: Found vda9 Jan 17 12:18:00.224168 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 17 12:18:00.216152 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:18:00.236005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:18:00.236237 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:18:00.236657 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:18:00.236870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:18:00.266953 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:18:00.267194 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:18:00.300859 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 17 12:18:00.313023 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:18:00.313081 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:18:00.318178 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:18:00.318293 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 12:18:00.318323 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:18:00.328211 update_engine[1450]: I20250117 12:18:00.327862 1450 main.cc:92] Flatcar Update Engine starting Jan 17 12:18:00.335393 jq[1452]: true Jan 17 12:18:00.338567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1381) Jan 17 12:18:00.347637 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:18:00.360588 update_engine[1450]: I20250117 12:18:00.359154 1450 update_check_scheduler.cc:74] Next update check in 9m52s Jan 17 12:18:00.355187 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:18:00.365428 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 12:18:00.367566 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:18:00.375398 tar[1457]: linux-amd64/helm Jan 17 12:18:00.380825 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:18:00.424009 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:18:00.426647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:18:00.455396 jq[1474]: true Jan 17 12:18:00.555605 systemd-logind[1446]: New seat seat0. Jan 17 12:18:00.573413 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:18:00.627335 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:18:00.627452 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:18:00.628238 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:18:00.632106 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:18:00.632106 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:18:00.632106 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 12:18:00.656615 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 17 12:18:00.656615 extend-filesystems[1436]: Found vdb Jan 17 12:18:00.641209 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:18:00.641738 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:18:00.669317 systemd-networkd[1371]: eth1: Gained IPv6LL Jan 17 12:18:00.670135 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. 
Jan 17 12:18:00.678550 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:18:00.682127 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:18:00.712881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:00.719865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:18:00.862745 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:00.871304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:18:00.892519 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:18:00.921049 systemd[1]: Starting sshkeys.service... Jan 17 12:18:00.969887 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:18:00.970728 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:18:00.994973 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:18:01.206924 coreos-metadata[1521]: Jan 17 12:18:01.206 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:18:01.219667 coreos-metadata[1521]: Jan 17 12:18:01.219 INFO Fetch successful Jan 17 12:18:01.269772 unknown[1521]: wrote ssh authorized keys file for user: core Jan 17 12:18:01.310514 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 17 12:18:01.313767 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:18:01.407562 update-ssh-keys[1525]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:01.409906 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:18:01.429165 systemd[1]: Finished sshkeys.service. Jan 17 12:18:01.451882 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:18:01.549635 containerd[1469]: time="2025-01-17T12:18:01.546770578Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:18:01.581102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:18:01.615892 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:18:01.676165 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:18:01.676515 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:18:01.691146 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:18:01.733497 containerd[1469]: time="2025-01-17T12:18:01.732291077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.742406 containerd[1469]: time="2025-01-17T12:18:01.742019300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:01.742406 containerd[1469]: time="2025-01-17T12:18:01.742096883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:18:01.742406 containerd[1469]: time="2025-01-17T12:18:01.742129673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 12:18:01.744237 containerd[1469]: time="2025-01-17T12:18:01.744176157Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:18:01.744921 containerd[1469]: time="2025-01-17T12:18:01.744415408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.744921 containerd[1469]: time="2025-01-17T12:18:01.744598013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:01.744921 containerd[1469]: time="2025-01-17T12:18:01.744719676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.745305 containerd[1469]: time="2025-01-17T12:18:01.745277456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:01.745498 containerd[1469]: time="2025-01-17T12:18:01.745470368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.745596 containerd[1469]: time="2025-01-17T12:18:01.745571644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:01.745660 containerd[1469]: time="2025-01-17T12:18:01.745648000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.747388134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.747814967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.748059296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.748088653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.748245836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:18:01.748561 containerd[1469]: time="2025-01-17T12:18:01.748327963Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:18:01.780115 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:18:01.798440 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.798519902Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.798632225Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.798656805Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.798738076Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.798765046Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:18:01.802690 containerd[1469]: time="2025-01-17T12:18:01.800747705Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804412646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804652006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804678402Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804699084Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804724054Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804751238Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804771224Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804787992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804803922Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804817673Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804831474Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804844113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804868443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.805990 containerd[1469]: time="2025-01-17T12:18:01.804890299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.804909842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.804931359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.804994536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805019430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805038712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805059999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805082567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805107927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805130922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805151620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805170576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805196588Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805236349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805259742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.806840 containerd[1469]: time="2025-01-17T12:18:01.805277088Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809621870Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809715482Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809739855Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809766523Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809779789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809798742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809811728Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:18:01.813415 containerd[1469]: time="2025-01-17T12:18:01.809823733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:18:01.810950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:18:01.813905 containerd[1469]: time="2025-01-17T12:18:01.810175906Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:18:01.813905 containerd[1469]: time="2025-01-17T12:18:01.810268791Z" level=info msg="Connect containerd service" Jan 17 12:18:01.813905 containerd[1469]: time="2025-01-17T12:18:01.810340613Z" level=info msg="using legacy CRI server" Jan 17 12:18:01.818631 containerd[1469]: time="2025-01-17T12:18:01.815414671Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Jan 17 12:18:01.818631 containerd[1469]: time="2025-01-17T12:18:01.816623315Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:18:01.815608 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:18:01.827401 containerd[1469]: time="2025-01-17T12:18:01.826891472Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:18:01.827853 containerd[1469]: time="2025-01-17T12:18:01.827815809Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:18:01.828028 containerd[1469]: time="2025-01-17T12:18:01.828006062Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:18:01.828183 containerd[1469]: time="2025-01-17T12:18:01.828143903Z" level=info msg="Start subscribing containerd event" Jan 17 12:18:01.828332 containerd[1469]: time="2025-01-17T12:18:01.828312967Z" level=info msg="Start recovering state" Jan 17 12:18:01.828571 containerd[1469]: time="2025-01-17T12:18:01.828547023Z" level=info msg="Start event monitor" Jan 17 12:18:01.828683 containerd[1469]: time="2025-01-17T12:18:01.828662111Z" level=info msg="Start snapshots syncer" Jan 17 12:18:01.828754 containerd[1469]: time="2025-01-17T12:18:01.828735023Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:18:01.828824 containerd[1469]: time="2025-01-17T12:18:01.828806564Z" level=info msg="Start streaming server" Jan 17 12:18:01.828988 containerd[1469]: time="2025-01-17T12:18:01.828970047Z" level=info msg="containerd successfully booted in 0.287224s" Jan 17 12:18:01.829456 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:18:02.313062 tar[1457]: linux-amd64/LICENSE Jan 17 12:18:02.314306 tar[1457]: linux-amd64/README.md Jan 17 12:18:02.356646 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:18:03.090279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:03.092154 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:18:03.098911 systemd[1]: Startup finished in 1.582s (kernel) + 6.545s (initrd) + 7.815s (userspace) = 15.942s. Jan 17 12:18:03.108298 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:04.100626 kubelet[1556]: E0117 12:18:04.100523 1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:04.104614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:04.104867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:04.105891 systemd[1]: kubelet.service: Consumed 1.544s CPU time. Jan 17 12:18:09.991047 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:18:09.998828 systemd[1]: Started sshd@0-209.38.133.237:22-139.178.68.195:39846.service - OpenSSH per-connection server daemon (139.178.68.195:39846). 
Jan 17 12:18:10.072312 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 39846 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:10.075982 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:10.089974 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:18:10.095895 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:18:10.101001 systemd-logind[1446]: New session 1 of user core. Jan 17 12:18:10.124219 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:18:10.134431 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:18:10.140144 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:18:10.273619 systemd[1573]: Queued start job for default target default.target. Jan 17 12:18:10.286326 systemd[1573]: Created slice app.slice - User Application Slice. Jan 17 12:18:10.286390 systemd[1573]: Reached target paths.target - Paths. Jan 17 12:18:10.286412 systemd[1573]: Reached target timers.target - Timers. Jan 17 12:18:10.288387 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:18:10.314036 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:18:10.314236 systemd[1573]: Reached target sockets.target - Sockets. Jan 17 12:18:10.314261 systemd[1573]: Reached target basic.target - Basic System. Jan 17 12:18:10.314333 systemd[1573]: Reached target default.target - Main User Target. Jan 17 12:18:10.314402 systemd[1573]: Startup finished in 164ms. Jan 17 12:18:10.314721 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:18:10.328721 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:18:10.402900 systemd[1]: Started sshd@1-209.38.133.237:22-139.178.68.195:39860.service - OpenSSH per-connection server daemon (139.178.68.195:39860). Jan 17 12:18:10.452053 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 39860 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:10.454290 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:10.460866 systemd-logind[1446]: New session 2 of user core. Jan 17 12:18:10.467736 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:18:10.532822 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:10.545702 systemd[1]: sshd@1-209.38.133.237:22-139.178.68.195:39860.service: Deactivated successfully. Jan 17 12:18:10.547864 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:18:10.550248 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:18:10.555821 systemd[1]: Started sshd@2-209.38.133.237:22-139.178.68.195:39870.service - OpenSSH per-connection server daemon (139.178.68.195:39870). Jan 17 12:18:10.557835 systemd-logind[1446]: Removed session 2. Jan 17 12:18:10.606400 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 39870 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:10.608673 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:10.618500 systemd-logind[1446]: New session 3 of user core. Jan 17 12:18:10.627802 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 17 12:18:10.688767 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:10.703571 systemd[1]: sshd@2-209.38.133.237:22-139.178.68.195:39870.service: Deactivated successfully. Jan 17 12:18:10.719524 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:18:10.723625 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:18:10.730900 systemd[1]: Started sshd@3-209.38.133.237:22-139.178.68.195:39876.service - OpenSSH per-connection server daemon (139.178.68.195:39876). Jan 17 12:18:10.733340 systemd-logind[1446]: Removed session 3. Jan 17 12:18:10.794570 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 39876 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:10.798528 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:10.806248 systemd-logind[1446]: New session 4 of user core. Jan 17 12:18:10.811734 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:18:10.887690 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:10.898126 systemd[1]: sshd@3-209.38.133.237:22-139.178.68.195:39876.service: Deactivated successfully. Jan 17 12:18:10.901152 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:18:10.903687 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:18:10.924262 systemd[1]: Started sshd@4-209.38.133.237:22-139.178.68.195:39886.service - OpenSSH per-connection server daemon (139.178.68.195:39886). Jan 17 12:18:10.929760 systemd-logind[1446]: Removed session 4. Jan 17 12:18:10.979859 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 39886 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:10.980559 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:10.987431 systemd-logind[1446]: New session 5 of user core. Jan 17 12:18:10.996694 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:18:11.079771 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:18:11.080255 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:11.096232 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:11.101212 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:11.111610 systemd[1]: sshd@4-209.38.133.237:22-139.178.68.195:39886.service: Deactivated successfully. Jan 17 12:18:11.113710 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:18:11.115519 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:18:11.122513 systemd[1]: Started sshd@5-209.38.133.237:22-139.178.68.195:39898.service - OpenSSH per-connection server daemon (139.178.68.195:39898). Jan 17 12:18:11.124988 systemd-logind[1446]: Removed session 5. Jan 17 12:18:11.171526 sshd[1613]: Accepted publickey for core from 139.178.68.195 port 39898 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:11.173914 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:11.182334 systemd-logind[1446]: New session 6 of user core. Jan 17 12:18:11.189779 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:18:11.251908 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:18:11.252367 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:11.258721 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:11.268565 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:18:11.269120 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:11.286834 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:11.300348 auditctl[1620]: No rules Jan 17 12:18:11.301007 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:18:11.301287 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:11.310495 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:11.351809 augenrules[1638]: No rules Jan 17 12:18:11.353562 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:11.354999 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:11.360600 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:11.369065 systemd[1]: sshd@5-209.38.133.237:22-139.178.68.195:39898.service: Deactivated successfully. Jan 17 12:18:11.372338 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:18:11.374574 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:18:11.379874 systemd[1]: Started sshd@6-209.38.133.237:22-139.178.68.195:39902.service - OpenSSH per-connection server daemon (139.178.68.195:39902). Jan 17 12:18:11.382194 systemd-logind[1446]: Removed session 6. Jan 17 12:18:11.450267 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 39902 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:11.454252 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:11.464721 systemd-logind[1446]: New session 7 of user core. Jan 17 12:18:11.477984 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:18:11.548452 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:18:11.548894 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:12.190858 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:18:12.193215 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:18:12.834760 dockerd[1666]: time="2025-01-17T12:18:12.833974912Z" level=info msg="Starting up" Jan 17 12:18:12.969816 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport18596985-merged.mount: Deactivated successfully. Jan 17 12:18:13.011955 dockerd[1666]: time="2025-01-17T12:18:13.011837964Z" level=info msg="Loading containers: start." Jan 17 12:18:13.196322 kernel: Initializing XFRM netlink socket Jan 17 12:18:13.232677 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:18:13.745885 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 17 12:18:13.746570 systemd-timesyncd[1346]: Contacted time server 198.30.92.2:123 (2.flatcar.pool.ntp.org). 
Jan 17 12:18:13.746683 systemd-timesyncd[1346]: Initial clock synchronization to Fri 2025-01-17 12:18:13.745654 UTC. Jan 17 12:18:13.773334 systemd-networkd[1371]: docker0: Link UP Jan 17 12:18:13.800138 dockerd[1666]: time="2025-01-17T12:18:13.799988171Z" level=info msg="Loading containers: done." Jan 17 12:18:13.825191 dockerd[1666]: time="2025-01-17T12:18:13.825089934Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:18:13.825437 dockerd[1666]: time="2025-01-17T12:18:13.825308098Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:18:13.825486 dockerd[1666]: time="2025-01-17T12:18:13.825462502Z" level=info msg="Daemon has completed initialization" Jan 17 12:18:13.875958 dockerd[1666]: time="2025-01-17T12:18:13.875793088Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:18:13.876790 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:18:14.803127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:18:14.824766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:15.053703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:15.057215 containerd[1469]: time="2025-01-17T12:18:15.056793407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:18:15.072124 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:15.180286 kubelet[1825]: E0117 12:18:15.180201 1825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:15.185073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:15.185320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:15.716763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018145459.mount: Deactivated successfully. 
Jan 17 12:18:17.382284 containerd[1469]: time="2025-01-17T12:18:17.381715041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.383842 containerd[1469]: time="2025-01-17T12:18:17.383740277Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 17 12:18:17.386218 containerd[1469]: time="2025-01-17T12:18:17.384700508Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.388521 containerd[1469]: time="2025-01-17T12:18:17.388472813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.390573 containerd[1469]: time="2025-01-17T12:18:17.390511716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.332633877s" Jan 17 12:18:17.390699 containerd[1469]: time="2025-01-17T12:18:17.390583481Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:18:17.423564 containerd[1469]: time="2025-01-17T12:18:17.423492803Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:18:19.443482 containerd[1469]: time="2025-01-17T12:18:19.443388451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.445372 containerd[1469]: time="2025-01-17T12:18:19.445303428Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 17 12:18:19.446232 containerd[1469]: time="2025-01-17T12:18:19.445878923Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.450419 containerd[1469]: time="2025-01-17T12:18:19.450357750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.452552 containerd[1469]: time="2025-01-17T12:18:19.452492940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.028937961s" Jan 17 12:18:19.452747 containerd[1469]: time="2025-01-17T12:18:19.452724940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:18:19.485216 
containerd[1469]: time="2025-01-17T12:18:19.484869677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:18:19.898302 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 12:18:20.968631 containerd[1469]: time="2025-01-17T12:18:20.968562130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.970989 containerd[1469]: time="2025-01-17T12:18:20.970919006Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 17 12:18:20.971418 containerd[1469]: time="2025-01-17T12:18:20.971351425Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.975477 containerd[1469]: time="2025-01-17T12:18:20.975391398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:20.977108 containerd[1469]: time="2025-01-17T12:18:20.976911342Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.491977871s" Jan 17 12:18:20.977108 containerd[1469]: time="2025-01-17T12:18:20.976972173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:18:21.032035 containerd[1469]: time="2025-01-17T12:18:21.031939530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:18:22.541654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217737956.mount: Deactivated successfully. Jan 17 12:18:23.003578 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 17 12:18:23.332737 containerd[1469]: time="2025-01-17T12:18:23.331407032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:23.334331 containerd[1469]: time="2025-01-17T12:18:23.334244696Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 17 12:18:23.335544 containerd[1469]: time="2025-01-17T12:18:23.335461304Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:23.338879 containerd[1469]: time="2025-01-17T12:18:23.338825981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:23.340895 containerd[1469]: time="2025-01-17T12:18:23.340092854Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.308087858s" Jan 17 12:18:23.340895 containerd[1469]: time="2025-01-17T12:18:23.340190346Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:18:23.378451 containerd[1469]: time="2025-01-17T12:18:23.378398340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:18:24.024138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472961983.mount: Deactivated successfully. Jan 17 12:18:25.438464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:18:25.446602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:18:25.701307 containerd[1469]: time="2025-01-17T12:18:25.698290637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:25.701307 containerd[1469]: time="2025-01-17T12:18:25.700717313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:18:25.702569 containerd[1469]: time="2025-01-17T12:18:25.702513826Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:25.722230 containerd[1469]: time="2025-01-17T12:18:25.720488617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:25.722789 containerd[1469]: time="2025-01-17T12:18:25.722732877Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.344021384s" Jan 17 12:18:25.722789 containerd[1469]: time="2025-01-17T12:18:25.722795823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:18:25.747262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:25.758828 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:25.767387 containerd[1469]: time="2025-01-17T12:18:25.767301997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:18:25.852870 kubelet[1980]: E0117 12:18:25.852742 1980 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:25.856665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:25.856889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:26.122402 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 17 12:18:26.261505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050883133.mount: Deactivated successfully. 
Jan 17 12:18:26.268226 containerd[1469]: time="2025-01-17T12:18:26.267508367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.269782 containerd[1469]: time="2025-01-17T12:18:26.269426904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:18:26.271132 containerd[1469]: time="2025-01-17T12:18:26.270607578Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.274215 containerd[1469]: time="2025-01-17T12:18:26.274119278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.276266 containerd[1469]: time="2025-01-17T12:18:26.276112900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 508.750773ms" Jan 17 12:18:26.276266 containerd[1469]: time="2025-01-17T12:18:26.276204368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:18:26.313940 containerd[1469]: time="2025-01-17T12:18:26.313885024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:18:26.884031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198526758.mount: Deactivated successfully. Jan 17 12:18:29.177968 containerd[1469]: time="2025-01-17T12:18:29.176331909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:29.177968 containerd[1469]: time="2025-01-17T12:18:29.177888079Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 17 12:18:29.181189 containerd[1469]: time="2025-01-17T12:18:29.180462092Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:29.189001 containerd[1469]: time="2025-01-17T12:18:29.188920167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.874951086s" Jan 17 12:18:29.189297 containerd[1469]: time="2025-01-17T12:18:29.189272315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:18:29.190535 containerd[1469]: time="2025-01-17T12:18:29.190072441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:33.114010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:18:33.125671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:33.150113 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit session-7.scope)... Jan 17 12:18:33.150336 systemd[1]: Reloading... Jan 17 12:18:33.333266 zram_generator::config[2149]: No configuration found. Jan 17 12:18:33.490725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:33.578806 systemd[1]: Reloading finished in 427 ms. Jan 17 12:18:33.657263 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:18:33.657391 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:18:33.657730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:33.668717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:33.846488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:33.859888 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:33.935635 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:33.935635 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:18:33.935635 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:33.951213 kubelet[2202]: I0117 12:18:33.941498 2202 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:34.576685 kubelet[2202]: I0117 12:18:34.576591 2202 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:18:34.576685 kubelet[2202]: I0117 12:18:34.576648 2202 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:34.576965 kubelet[2202]: I0117 12:18:34.576947 2202 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:18:34.603651 kubelet[2202]: I0117 12:18:34.603430 2202 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:34.606143 kubelet[2202]: E0117 12:18:34.606011 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.133.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.622318 kubelet[2202]: I0117 12:18:34.622138 2202 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:18:34.623185 kubelet[2202]: I0117 12:18:34.622815 2202 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:34.623698 kubelet[2202]: I0117 12:18:34.622873 2202 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-8-018bcc3779","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:18:34.626200 kubelet[2202]: I0117 12:18:34.626087 2202 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:34.626200 kubelet[2202]: I0117 12:18:34.626151 2202 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:18:34.626903 kubelet[2202]: I0117 12:18:34.626656 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:34.627969 kubelet[2202]: I0117 12:18:34.627941 2202 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:18:34.628302 kubelet[2202]: I0117 12:18:34.628084 2202 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:34.628302 kubelet[2202]: I0117 12:18:34.628141 2202 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:18:34.628302 kubelet[2202]: I0117 12:18:34.628193 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:34.633524 kubelet[2202]: I0117 12:18:34.632899 2202 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:34.636200 kubelet[2202]: I0117 12:18:34.634816 2202 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:34.636200 kubelet[2202]: W0117 12:18:34.634947 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:18:34.636839 kubelet[2202]: I0117 12:18:34.636807 2202 server.go:1264] "Started kubelet" Jan 17 12:18:34.649870 kubelet[2202]: I0117 12:18:34.649747 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:34.652928 kubelet[2202]: E0117 12:18:34.652706 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.133.237:6443/api/v1/namespaces/default/events\": dial tcp 209.38.133.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-8-018bcc3779.181b7a15a375ce15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-8-018bcc3779,UID:ci-4081.3.0-8-018bcc3779,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-8-018bcc3779,},FirstTimestamp:2025-01-17 12:18:34.636766741 +0000 UTC m=+0.769889719,LastTimestamp:2025-01-17 12:18:34.636766741 +0000 UTC m=+0.769889719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-8-018bcc3779,}" Jan 17 12:18:34.654212 kubelet[2202]: W0117 12:18:34.653903 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.133.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-018bcc3779&limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.657223 kubelet[2202]: E0117 12:18:34.657150 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.133.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-018bcc3779&limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.660203 kubelet[2202]: I0117 12:18:34.660117 2202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:34.663339 kubelet[2202]: W0117 12:18:34.662774 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.133.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.663339 kubelet[2202]: E0117 12:18:34.662870 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.133.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.665073 kubelet[2202]: I0117 12:18:34.665026 2202 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:18:34.666557 kubelet[2202]: I0117 12:18:34.666335 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:34.666814 kubelet[2202]: I0117 12:18:34.666621 2202 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:34.669355 kubelet[2202]: I0117 12:18:34.669320 2202 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:18:34.670187 kubelet[2202]: I0117 12:18:34.669933 2202 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:18:34.670187 kubelet[2202]: I0117 12:18:34.670029 2202 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:18:34.671696 kubelet[2202]: W0117 12:18:34.670460 2202 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.133.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.671696 kubelet[2202]: E0117 12:18:34.670522 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.133.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.671696 kubelet[2202]: E0117 12:18:34.670580 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.133.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-018bcc3779?timeout=10s\": dial tcp 209.38.133.237:6443: connect: connection refused" interval="200ms" Jan 17 12:18:34.673612 kubelet[2202]: I0117 12:18:34.673572 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:34.675540 kubelet[2202]: E0117 12:18:34.675508 2202 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:34.678185 kubelet[2202]: I0117 12:18:34.676316 2202 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:34.678185 kubelet[2202]: I0117 12:18:34.676340 2202 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:34.693421 kubelet[2202]: I0117 12:18:34.693350 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:34.696284 kubelet[2202]: I0117 12:18:34.696230 2202 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:18:34.696510 kubelet[2202]: I0117 12:18:34.696496 2202 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:34.696619 kubelet[2202]: I0117 12:18:34.696606 2202 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:18:34.696769 kubelet[2202]: E0117 12:18:34.696741 2202 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:34.708092 kubelet[2202]: W0117 12:18:34.708011 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.133.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.710312 kubelet[2202]: E0117 12:18:34.710274 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.133.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:34.717266 kubelet[2202]: I0117 12:18:34.717230 2202 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:34.717504 kubelet[2202]: I0117 12:18:34.717481 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:34.717621 kubelet[2202]: I0117 12:18:34.717607 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:34.721679 kubelet[2202]: I0117 12:18:34.721633 2202 policy_none.go:49] "None policy: Start" Jan 17 12:18:34.723007 kubelet[2202]: I0117 12:18:34.722973 2202 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:34.723289 kubelet[2202]: I0117 12:18:34.723018 2202 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:34.736244 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:18:34.760122 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:18:34.766866 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:18:34.772795 kubelet[2202]: I0117 12:18:34.772566 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.772795 kubelet[2202]: I0117 12:18:34.772685 2202 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:34.773647 kubelet[2202]: E0117 12:18:34.773362 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.133.237:6443/api/v1/nodes\": dial tcp 209.38.133.237:6443: connect: connection refused" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.774737 kubelet[2202]: I0117 12:18:34.774096 2202 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:18:34.774737 kubelet[2202]: I0117 12:18:34.774316 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:34.776597 kubelet[2202]: E0117 12:18:34.776571 2202 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-8-018bcc3779\" not found" Jan 17 12:18:34.800490 kubelet[2202]: I0117 12:18:34.797913 2202 topology_manager.go:215] "Topology Admit Handler" podUID="4d733c69c6d15c7eaf5b3ff7c4c3a720" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.801873 kubelet[2202]: I0117 12:18:34.801790 2202 topology_manager.go:215] "Topology Admit Handler" podUID="f28ddc85ad625346a7a1c35b1c705764" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.806700 kubelet[2202]: I0117 12:18:34.806466 2202 topology_manager.go:215] "Topology Admit Handler" podUID="838fa06ad36ed537296a08d6fb335e6a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.822254 systemd[1]: Created slice kubepods-burstable-podf28ddc85ad625346a7a1c35b1c705764.slice - libcontainer container kubepods-burstable-podf28ddc85ad625346a7a1c35b1c705764.slice. Jan 17 12:18:34.853525 systemd[1]: Created slice kubepods-burstable-pod4d733c69c6d15c7eaf5b3ff7c4c3a720.slice - libcontainer container kubepods-burstable-pod4d733c69c6d15c7eaf5b3ff7c4c3a720.slice. Jan 17 12:18:34.868772 systemd[1]: Created slice kubepods-burstable-pod838fa06ad36ed537296a08d6fb335e6a.slice - libcontainer container kubepods-burstable-pod838fa06ad36ed537296a08d6fb335e6a.slice. 
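The three Topology Admit Handler entries and the slices created right after them line up one-to-one: with CgroupDriver=systemd (from the nodeConfig above), each burstable static pod gets a slice named from its QoS class and pod UID, nested under kubepods.slice/kubepods-burstable.slice. A minimal Go sketch of that naming, reproducing exactly the slice names systemd logs here; the static pod UIDs are dash-free config hashes, and handling of dashed API-pod UIDs is deliberately omitted:

```go
package main

import "fmt"

// sliceName builds the leaf slice name seen in the systemd entries above.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, podUID)
}

func main() {
	for _, uid := range []string{
		"f28ddc85ad625346a7a1c35b1c705764", // kube-apiserver
		"4d733c69c6d15c7eaf5b3ff7c4c3a720", // kube-scheduler
		"838fa06ad36ed537296a08d6fb335e6a", // kube-controller-manager
	} {
		fmt.Println(sliceName("burstable", uid))
	}
}
```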
Jan 17 12:18:34.872315 kubelet[2202]: E0117 12:18:34.871892 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.133.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-018bcc3779?timeout=10s\": dial tcp 209.38.133.237:6443: connect: connection refused" interval="400ms" Jan 17 12:18:34.972993 kubelet[2202]: I0117 12:18:34.972868 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d733c69c6d15c7eaf5b3ff7c4c3a720-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-8-018bcc3779\" (UID: \"4d733c69c6d15c7eaf5b3ff7c4c3a720\") " pod="kube-system/kube-scheduler-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.972993 kubelet[2202]: I0117 12:18:34.972939 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.972993 kubelet[2202]: I0117 12:18:34.973020 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.972993 kubelet[2202]: I0117 12:18:34.973082 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.974232 kubelet[2202]: I0117 12:18:34.973118 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.974232 kubelet[2202]: I0117 12:18:34.973149 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.974232 kubelet[2202]: I0117 12:18:34.973207 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.974232 kubelet[2202]: I0117 12:18:34.973238 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.974232 kubelet[2202]: I0117 12:18:34.973269 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.976031 kubelet[2202]: I0117 12:18:34.975573 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:34.976173 kubelet[2202]: E0117 12:18:34.976119 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.133.237:6443/api/v1/nodes\": dial tcp 209.38.133.237:6443: connect: connection refused" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:35.148086 kubelet[2202]: E0117 12:18:35.147431 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:35.149342 containerd[1469]: time="2025-01-17T12:18:35.148760322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-8-018bcc3779,Uid:f28ddc85ad625346a7a1c35b1c705764,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:35.165774 kubelet[2202]: E0117 12:18:35.165720 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:35.166525 containerd[1469]: time="2025-01-17T12:18:35.166464290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-8-018bcc3779,Uid:4d733c69c6d15c7eaf5b3ff7c4c3a720,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:35.175968 kubelet[2202]: E0117 12:18:35.175725 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:35.177417 containerd[1469]: time="2025-01-17T12:18:35.176879855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-8-018bcc3779,Uid:838fa06ad36ed537296a08d6fb335e6a,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:35.273012 kubelet[2202]: E0117 12:18:35.272940 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.133.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-018bcc3779?timeout=10s\": dial tcp 209.38.133.237:6443: connect: connection refused" interval="800ms" Jan 17 12:18:35.378539 kubelet[2202]: I0117 12:18:35.378482 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:35.391809 kubelet[2202]: E0117 12:18:35.391742 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.133.237:6443/api/v1/nodes\": dial tcp 209.38.133.237:6443: connect: connection refused" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:35.588138 kubelet[2202]: W0117 12:18:35.587932 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://209.38.133.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:35.588840 kubelet[2202]: E0117 12:18:35.588758 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.133.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:35.677814 kubelet[2202]: W0117 12:18:35.677709 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.133.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-018bcc3779&limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:35.677953 kubelet[2202]: E0117 12:18:35.677855 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.133.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-018bcc3779&limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:35.740534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143353751.mount: Deactivated successfully. Jan 17 12:18:35.754251 containerd[1469]: time="2025-01-17T12:18:35.753878248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:35.755499 containerd[1469]: time="2025-01-17T12:18:35.755435105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:35.757308 containerd[1469]: time="2025-01-17T12:18:35.757231786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:35.757414 containerd[1469]: time="2025-01-17T12:18:35.757311898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:18:35.760193 containerd[1469]: time="2025-01-17T12:18:35.758545729Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:35.760193 containerd[1469]: time="2025-01-17T12:18:35.760049311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:35.760679 containerd[1469]: time="2025-01-17T12:18:35.760639307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:35.766208 containerd[1469]: time="2025-01-17T12:18:35.766117381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:35.767441 containerd[1469]: time="2025-01-17T12:18:35.767382778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.504746ms" Jan 17 12:18:35.771798 containerd[1469]: time="2025-01-17T12:18:35.769930453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.359041ms" Jan 17 12:18:35.773338 containerd[1469]: time="2025-01-17T12:18:35.773280988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.287534ms" Jan 17 12:18:36.022292 containerd[1469]: time="2025-01-17T12:18:36.020609034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:36.022292 containerd[1469]: time="2025-01-17T12:18:36.020749243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:36.022292 containerd[1469]: time="2025-01-17T12:18:36.020776762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.022292 containerd[1469]: time="2025-01-17T12:18:36.020925511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.034431 containerd[1469]: time="2025-01-17T12:18:36.033650482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:36.034431 containerd[1469]: time="2025-01-17T12:18:36.033764106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:36.034431 containerd[1469]: time="2025-01-17T12:18:36.033792472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.034431 containerd[1469]: time="2025-01-17T12:18:36.034012999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.051615 containerd[1469]: time="2025-01-17T12:18:36.051441930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:36.051615 containerd[1469]: time="2025-01-17T12:18:36.051545087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:36.051912 containerd[1469]: time="2025-01-17T12:18:36.051563493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.051912 containerd[1469]: time="2025-01-17T12:18:36.051701337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:36.077049 kubelet[2202]: E0117 12:18:36.076983 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.133.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-018bcc3779?timeout=10s\": dial tcp 209.38.133.237:6443: connect: connection refused" interval="1.6s" Jan 17 12:18:36.083647 systemd[1]: Started cri-containerd-cc29c9f981a1e8f8a4b9b30df98fa8a1cba6da698288380ec9370138993c4141.scope - libcontainer container cc29c9f981a1e8f8a4b9b30df98fa8a1cba6da698288380ec9370138993c4141. Jan 17 12:18:36.096646 systemd[1]: Started cri-containerd-eb0b9dd6a1b4ccbac4cd17ca9da539b2c043d9c064c6ea5e26d65ba73039a9e6.scope - libcontainer container eb0b9dd6a1b4ccbac4cd17ca9da539b2c043d9c064c6ea5e26d65ba73039a9e6. Jan 17 12:18:36.120731 systemd[1]: Started cri-containerd-44b814dd39a80d08d9a21dd5bc48ae05f48e44e6ae0c2a34a78b9d43104d54a7.scope - libcontainer container 44b814dd39a80d08d9a21dd5bc48ae05f48e44e6ae0c2a34a78b9d43104d54a7. Jan 17 12:18:36.194007 kubelet[2202]: I0117 12:18:36.193956 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:36.198238 kubelet[2202]: E0117 12:18:36.197556 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.133.237:6443/api/v1/nodes\": dial tcp 209.38.133.237:6443: connect: connection refused" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:36.233700 containerd[1469]: time="2025-01-17T12:18:36.233464370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-8-018bcc3779,Uid:4d733c69c6d15c7eaf5b3ff7c4c3a720,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0b9dd6a1b4ccbac4cd17ca9da539b2c043d9c064c6ea5e26d65ba73039a9e6\"" Jan 17 12:18:36.234296 kubelet[2202]: W0117 12:18:36.234096 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.133.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:36.235105 kubelet[2202]: E0117 12:18:36.234887 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.133.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:36.236553 containerd[1469]: time="2025-01-17T12:18:36.235812549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-8-018bcc3779,Uid:838fa06ad36ed537296a08d6fb335e6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc29c9f981a1e8f8a4b9b30df98fa8a1cba6da698288380ec9370138993c4141\"" Jan 17 12:18:36.237234 kubelet[2202]: E0117 12:18:36.237197 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:36.237985 kubelet[2202]: E0117 12:18:36.237935 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:36.245567 containerd[1469]: time="2025-01-17T12:18:36.245374131Z" level=info msg="CreateContainer within sandbox \"cc29c9f981a1e8f8a4b9b30df98fa8a1cba6da698288380ec9370138993c4141\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:18:36.245567 containerd[1469]: time="2025-01-17T12:18:36.245491845Z" level=info msg="CreateContainer within sandbox \"eb0b9dd6a1b4ccbac4cd17ca9da539b2c043d9c064c6ea5e26d65ba73039a9e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:18:36.248699 kubelet[2202]: W0117 12:18:36.248607 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.133.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:36.248699 kubelet[2202]: E0117 12:18:36.248695 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.133.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:36.252240 containerd[1469]: time="2025-01-17T12:18:36.252057620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-8-018bcc3779,Uid:f28ddc85ad625346a7a1c35b1c705764,Namespace:kube-system,Attempt:0,} returns sandbox id \"44b814dd39a80d08d9a21dd5bc48ae05f48e44e6ae0c2a34a78b9d43104d54a7\"" Jan 17 12:18:36.253882 kubelet[2202]: E0117 12:18:36.253719 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:36.260367 containerd[1469]: time="2025-01-17T12:18:36.260221441Z" level=info msg="CreateContainer within sandbox \"44b814dd39a80d08d9a21dd5bc48ae05f48e44e6ae0c2a34a78b9d43104d54a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:18:36.292005 containerd[1469]: time="2025-01-17T12:18:36.290724043Z" level=info msg="CreateContainer within sandbox \"eb0b9dd6a1b4ccbac4cd17ca9da539b2c043d9c064c6ea5e26d65ba73039a9e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08e9039a1a686574571ffa5c083bc60905cdee06259e4bd353e95d0e838dec10\"" Jan 17 12:18:36.292634 containerd[1469]: time="2025-01-17T12:18:36.292582919Z" level=info msg="StartContainer for \"08e9039a1a686574571ffa5c083bc60905cdee06259e4bd353e95d0e838dec10\"" Jan 17 12:18:36.299980 containerd[1469]: time="2025-01-17T12:18:36.299634942Z" level=info msg="CreateContainer within sandbox \"44b814dd39a80d08d9a21dd5bc48ae05f48e44e6ae0c2a34a78b9d43104d54a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdf352951091995e1cd0c14560dd3ac8529f97da998e3608cd79a22bf9e16ee5\"" Jan 17 12:18:36.302094 containerd[1469]: time="2025-01-17T12:18:36.302028326Z" level=info msg="CreateContainer within sandbox \"cc29c9f981a1e8f8a4b9b30df98fa8a1cba6da698288380ec9370138993c4141\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f89f135f525d533447a6d7d0f4f5d13d2885a6b097cfadc031fb1cb00d3c70db\"" Jan 17 12:18:36.304360 containerd[1469]: time="2025-01-17T12:18:36.304319959Z" level=info msg="StartContainer for \"f89f135f525d533447a6d7d0f4f5d13d2885a6b097cfadc031fb1cb00d3c70db\"" Jan 17 12:18:36.306030 containerd[1469]: time="2025-01-17T12:18:36.304686852Z" level=info msg="StartContainer for \"cdf352951091995e1cd0c14560dd3ac8529f97da998e3608cd79a22bf9e16ee5\"" Jan 17 12:18:36.347595 systemd[1]: Started cri-containerd-08e9039a1a686574571ffa5c083bc60905cdee06259e4bd353e95d0e838dec10.scope - libcontainer container 
08e9039a1a686574571ffa5c083bc60905cdee06259e4bd353e95d0e838dec10. Jan 17 12:18:36.387497 systemd[1]: Started cri-containerd-cdf352951091995e1cd0c14560dd3ac8529f97da998e3608cd79a22bf9e16ee5.scope - libcontainer container cdf352951091995e1cd0c14560dd3ac8529f97da998e3608cd79a22bf9e16ee5. Jan 17 12:18:36.397815 systemd[1]: Started cri-containerd-f89f135f525d533447a6d7d0f4f5d13d2885a6b097cfadc031fb1cb00d3c70db.scope - libcontainer container f89f135f525d533447a6d7d0f4f5d13d2885a6b097cfadc031fb1cb00d3c70db. Jan 17 12:18:36.489777 containerd[1469]: time="2025-01-17T12:18:36.489586399Z" level=info msg="StartContainer for \"08e9039a1a686574571ffa5c083bc60905cdee06259e4bd353e95d0e838dec10\" returns successfully" Jan 17 12:18:36.520900 containerd[1469]: time="2025-01-17T12:18:36.520810417Z" level=info msg="StartContainer for \"cdf352951091995e1cd0c14560dd3ac8529f97da998e3608cd79a22bf9e16ee5\" returns successfully" Jan 17 12:18:36.536677 containerd[1469]: time="2025-01-17T12:18:36.536603533Z" level=info msg="StartContainer for \"f89f135f525d533447a6d7d0f4f5d13d2885a6b097cfadc031fb1cb00d3c70db\" returns successfully" Jan 17 12:18:36.687263 kubelet[2202]: E0117 12:18:36.686948 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.133.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.133.237:6443: connect: connection refused Jan 17 12:18:36.724222 kubelet[2202]: E0117 12:18:36.724043 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:36.736060 kubelet[2202]: E0117 12:18:36.735913 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:36.745855 kubelet[2202]: E0117 12:18:36.745709 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:37.747183 kubelet[2202]: E0117 12:18:37.747056 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:37.800182 kubelet[2202]: I0117 12:18:37.799920 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:38.751499 kubelet[2202]: E0117 12:18:38.751371 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:39.190406 kubelet[2202]: E0117 12:18:39.190219 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-8-018bcc3779\" not found" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:39.289196 kubelet[2202]: E0117 12:18:39.286888 2202 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-8-018bcc3779.181b7a15a375ce15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-8-018bcc3779,UID:ci-4081.3.0-8-018bcc3779,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-8-018bcc3779,},FirstTimestamp:2025-01-17 12:18:34.636766741 +0000 UTC m=+0.769889719,LastTimestamp:2025-01-17 12:18:34.636766741 +0000 UTC m=+0.769889719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-8-018bcc3779,}" Jan 17 12:18:39.345243 kubelet[2202]: E0117 12:18:39.344843 2202 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-8-018bcc3779.181b7a15a5c4a536 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-8-018bcc3779,UID:ci-4081.3.0-8-018bcc3779,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-8-018bcc3779,},FirstTimestamp:2025-01-17 12:18:34.675488054 +0000 UTC m=+0.808611033,LastTimestamp:2025-01-17 12:18:34.675488054 +0000 UTC m=+0.808611033,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-8-018bcc3779,}" Jan 17 12:18:39.361203 kubelet[2202]: I0117 12:18:39.359942 2202 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:39.651449 kubelet[2202]: I0117 12:18:39.650987 2202 apiserver.go:52] "Watching apiserver" Jan 17 12:18:39.670665 kubelet[2202]: I0117 12:18:39.670360 2202 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:18:40.213039 kubelet[2202]: W0117 12:18:40.212173 2202 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:40.213039 kubelet[2202]: E0117 12:18:40.212694 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:40.752270 kubelet[2202]: E0117 12:18:40.752038 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:41.821969 systemd[1]: Reloading requested from client PID 2473 ('systemctl') (unit session-7.scope)... Jan 17 12:18:41.821996 systemd[1]: Reloading... Jan 17 12:18:41.994383 zram_generator::config[2515]: No configuration found. Jan 17 12:18:42.099672 kubelet[2202]: W0117 12:18:42.099503 2202 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:42.102538 kubelet[2202]: E0117 12:18:42.101674 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:42.210960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
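The recurring "Nameserver limits exceeded" errors above come from a hard glibc limit: the resolver honors at most three nameserver entries, so the kubelet truncates the node's resolv.conf list when building pod DNS config and warns about what it dropped. Note the applied line in the log keeps a duplicate (67.207.67.2 appears twice), so the truncation does not dedupe. A minimal Go sketch of that behavior; reading /etc/resolv.conf directly is an illustration, not the kubelet's actual code path:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect nameserver entries in file order, duplicates included.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	// Truncate past the glibc limit, mirroring the kubelet's warning.
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded; omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```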
Jan 17 12:18:42.379105 systemd[1]: Reloading finished in 556 ms. Jan 17 12:18:42.450027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:42.463280 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:18:42.463963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:42.464308 systemd[1]: kubelet.service: Consumed 1.311s CPU time, 112.6M memory peak, 0B memory swap peak. Jan 17 12:18:42.484632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:42.657683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:42.676105 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:42.788134 kubelet[2563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:42.788134 kubelet[2563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:18:42.790457 kubelet[2563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:42.790457 kubelet[2563]: I0117 12:18:42.788849 2563 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:42.800189 kubelet[2563]: I0117 12:18:42.800033 2563 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:18:42.800189 kubelet[2563]: I0117 12:18:42.800070 2563 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:42.800462 kubelet[2563]: I0117 12:18:42.800417 2563 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:18:42.803555 kubelet[2563]: I0117 12:18:42.803473 2563 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:18:42.819199 kubelet[2563]: I0117 12:18:42.817987 2563 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:42.837186 kubelet[2563]: I0117 12:18:42.837066 2563 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:18:42.837503 kubelet[2563]: I0117 12:18:42.837461 2563 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:42.837808 kubelet[2563]: I0117 12:18:42.837509 2563 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-8-018bcc3779","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:18:42.838035 kubelet[2563]: I0117 12:18:42.837829 2563 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:42.838035 kubelet[2563]: I0117 12:18:42.837850 2563 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:18:42.838035 kubelet[2563]: I0117 12:18:42.837916 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:42.838210 kubelet[2563]: I0117 12:18:42.838078 2563 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:18:42.838210 kubelet[2563]: I0117 12:18:42.838097 2563 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:42.838210 kubelet[2563]: I0117 12:18:42.838130 2563 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:18:42.838210 kubelet[2563]: I0117 12:18:42.838151 2563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:42.846232 kubelet[2563]: I0117 12:18:42.844150 2563 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:42.846232 kubelet[2563]: I0117 12:18:42.844516 2563 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:42.846232 kubelet[2563]: I0117 12:18:42.845375 2563 server.go:1264] "Started kubelet" Jan 17 12:18:42.852043 kubelet[2563]: I0117 12:18:42.850462 2563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:42.865089 kubelet[2563]: I0117 12:18:42.862922 2563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:42.865089 kubelet[2563]: I0117 12:18:42.864718 2563 server.go:455] "Adding 
debug handlers to kubelet server" Jan 17 12:18:42.867937 kubelet[2563]: I0117 12:18:42.866338 2563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:42.867937 kubelet[2563]: I0117 12:18:42.866761 2563 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.870701 2563 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.873844 2563 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.874039 2563 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.877240 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.879775 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.879857 2563 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:42.880386 kubelet[2563]: I0117 12:18:42.879887 2563 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:18:42.880386 kubelet[2563]: E0117 12:18:42.879968 2563 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:42.893152 sudo[2577]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:18:42.894794 sudo[2577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:18:42.919316 kubelet[2563]: I0117 12:18:42.919018 2563 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:42.950246 kubelet[2563]: I0117 12:18:42.950131 2563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:42.966344 kubelet[2563]: I0117 12:18:42.964905 2563 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:42.978958 kubelet[2563]: E0117 12:18:42.978641 2563 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:42.980385 kubelet[2563]: E0117 12:18:42.980140 2563 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:18:42.985340 kubelet[2563]: E0117 12:18:42.985008 2563 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jan 17 12:18:43.016205 kubelet[2563]: I0117 12:18:43.013463 2563 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.053126 kubelet[2563]: I0117 12:18:43.050936 2563 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.053126 kubelet[2563]: I0117 12:18:43.051107 2563 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.120571 kubelet[2563]: I0117 12:18:43.120531 2563 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:43.120758 kubelet[2563]: I0117 12:18:43.120648 2563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:43.120758 kubelet[2563]: I0117 12:18:43.120740 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.120941 2563 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.120960 2563 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.120983 2563 policy_none.go:49] "None policy: Start" Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.121593 2563 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.121614 2563 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:43.122237 kubelet[2563]: I0117 12:18:43.121795 2563 state_mem.go:75] "Updated machine memory state" Jan 17 12:18:43.132687 kubelet[2563]: I0117 12:18:43.131350 2563 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:43.132687 kubelet[2563]: I0117 12:18:43.131957 2563 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:18:43.133136 kubelet[2563]: I0117 12:18:43.133099 2563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:43.181715 kubelet[2563]: I0117 12:18:43.180967 2563 topology_manager.go:215] "Topology Admit Handler" podUID="f28ddc85ad625346a7a1c35b1c705764" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.181715 kubelet[2563]: I0117 12:18:43.181116 2563 topology_manager.go:215] "Topology Admit Handler" podUID="838fa06ad36ed537296a08d6fb335e6a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.181715 kubelet[2563]: I0117 12:18:43.181571 2563 topology_manager.go:215] "Topology Admit Handler" podUID="4d733c69c6d15c7eaf5b3ff7c4c3a720" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.208860 kubelet[2563]: W0117 12:18:43.206887 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:43.208860 kubelet[2563]: W0117 12:18:43.207676 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:43.208860 kubelet[2563]: E0117 12:18:43.207797 2563 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-8-018bcc3779\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.208860 kubelet[2563]: W0117 12:18:43.207882 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:43.208860 kubelet[2563]: E0117 12:18:43.207921 2563 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303480 kubelet[2563]: I0117 12:18:43.303404 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303480 kubelet[2563]: I0117 12:18:43.303484 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303731 kubelet[2563]: I0117 12:18:43.303520 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303731 kubelet[2563]: I0117 12:18:43.303548 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303731 kubelet[2563]: I0117 12:18:43.303579 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d733c69c6d15c7eaf5b3ff7c4c3a720-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-8-018bcc3779\" (UID: \"4d733c69c6d15c7eaf5b3ff7c4c3a720\") " pod="kube-system/kube-scheduler-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303731 kubelet[2563]: I0117 12:18:43.303596 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303731 kubelet[2563]: I0117 12:18:43.303610 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303981 kubelet[2563]: I0117 12:18:43.303655 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f28ddc85ad625346a7a1c35b1c705764-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" (UID: \"f28ddc85ad625346a7a1c35b1c705764\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.303981 kubelet[2563]: I0117 12:18:43.303696 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/838fa06ad36ed537296a08d6fb335e6a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-018bcc3779\" (UID: \"838fa06ad36ed537296a08d6fb335e6a\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:43.510622 kubelet[2563]: E0117 12:18:43.510349 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:43.514537 kubelet[2563]: E0117 12:18:43.513772 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:43.514537 kubelet[2563]: E0117 12:18:43.514289 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:43.850958 kubelet[2563]: I0117 12:18:43.849114 2563 apiserver.go:52] "Watching apiserver" Jan 17 12:18:43.874953 kubelet[2563]: I0117 12:18:43.874870 2563 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:18:44.061497 sudo[2577]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:44.079997 kubelet[2563]: E0117 12:18:44.079814 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:44.083599 kubelet[2563]: E0117 12:18:44.083326 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:44.107597 kubelet[2563]: W0117 12:18:44.107443 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:44.107597 kubelet[2563]: E0117 12:18:44.107553 2563 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-8-018bcc3779\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" Jan 17 12:18:44.109205 kubelet[2563]: E0117 12:18:44.108188 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:44.175029 kubelet[2563]: I0117 12:18:44.174712 2563 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-8-018bcc3779" podStartSLOduration=4.174686473 podStartE2EDuration="4.174686473s" podCreationTimestamp="2025-01-17 12:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:44.148702038 +0000 UTC m=+1.463886166" watchObservedRunningTime="2025-01-17 12:18:44.174686473 +0000 UTC m=+1.489870601" Jan 17 12:18:44.197823 kubelet[2563]: I0117 12:18:44.197410 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-8-018bcc3779" podStartSLOduration=1.197383848 podStartE2EDuration="1.197383848s" podCreationTimestamp="2025-01-17 12:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:44.17496377 +0000 UTC m=+1.490147894" watchObservedRunningTime="2025-01-17 12:18:44.197383848 +0000 UTC m=+1.512567976" Jan 17 12:18:45.087144 kubelet[2563]: E0117 12:18:45.084055 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:45.087144 kubelet[2563]: E0117 12:18:45.086257 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:45.605405 update_engine[1450]: I20250117 12:18:45.605254 1450 update_attempter.cc:509] Updating boot flags... Jan 17 12:18:45.663248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2616) Jan 17 12:18:46.216095 kubelet[2563]: E0117 12:18:46.215969 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:46.246350 kubelet[2563]: I0117 12:18:46.246014 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-8-018bcc3779" podStartSLOduration=4.245993135 podStartE2EDuration="4.245993135s" podCreationTimestamp="2025-01-17 12:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:44.198926578 +0000 UTC m=+1.514110707" watchObservedRunningTime="2025-01-17 12:18:46.245993135 +0000 UTC m=+3.561177258" Jan 17 12:18:46.828741 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:46.832969 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:46.837889 systemd[1]: sshd@6-209.38.133.237:22-139.178.68.195:39902.service: Deactivated successfully. Jan 17 12:18:46.841744 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:18:46.842567 systemd[1]: session-7.scope: Consumed 7.837s CPU time, 187.8M memory peak, 0B memory swap peak. Jan 17 12:18:46.846112 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:18:46.847695 systemd-logind[1446]: Removed session 7. 
Jan 17 12:18:47.089944 kubelet[2563]: E0117 12:18:47.089778 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:50.172617 kubelet[2563]: E0117 12:18:50.172562 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:51.101528 kubelet[2563]: E0117 12:18:51.100968 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:54.135868 kubelet[2563]: E0117 12:18:54.134997 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:55.548516 kubelet[2563]: I0117 12:18:55.548471 2563 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:18:55.550971 containerd[1469]: time="2025-01-17T12:18:55.550547709Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:18:55.552381 kubelet[2563]: I0117 12:18:55.550889 2563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:18:56.164201 kubelet[2563]: I0117 12:18:56.161967 2563 topology_manager.go:215] "Topology Admit Handler" podUID="e939a2c4-47a0-4039-8db5-25e0ea79b80e" podNamespace="kube-system" podName="kube-proxy-m6fmh" Jan 17 12:18:56.175833 systemd[1]: Created slice kubepods-besteffort-pode939a2c4_47a0_4039_8db5_25e0ea79b80e.slice - libcontainer container kubepods-besteffort-pode939a2c4_47a0_4039_8db5_25e0ea79b80e.slice. Jan 17 12:18:56.195192 kubelet[2563]: I0117 12:18:56.194234 2563 topology_manager.go:215] "Topology Admit Handler" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" podNamespace="kube-system" podName="cilium-p7vx4" Jan 17 12:18:56.207415 systemd[1]: Created slice kubepods-burstable-pod2982d53b_908f_43a9_a46b_8b9e9f1749f8.slice - libcontainer container kubepods-burstable-pod2982d53b_908f_43a9_a46b_8b9e9f1749f8.slice. 
Jan 17 12:18:56.269919 kubelet[2563]: I0117 12:18:56.269684 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e939a2c4-47a0-4039-8db5-25e0ea79b80e-kube-proxy\") pod \"kube-proxy-m6fmh\" (UID: \"e939a2c4-47a0-4039-8db5-25e0ea79b80e\") " pod="kube-system/kube-proxy-m6fmh" Jan 17 12:18:56.269919 kubelet[2563]: I0117 12:18:56.269742 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e939a2c4-47a0-4039-8db5-25e0ea79b80e-xtables-lock\") pod \"kube-proxy-m6fmh\" (UID: \"e939a2c4-47a0-4039-8db5-25e0ea79b80e\") " pod="kube-system/kube-proxy-m6fmh" Jan 17 12:18:56.269919 kubelet[2563]: I0117 12:18:56.269761 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e939a2c4-47a0-4039-8db5-25e0ea79b80e-lib-modules\") pod \"kube-proxy-m6fmh\" (UID: \"e939a2c4-47a0-4039-8db5-25e0ea79b80e\") " pod="kube-system/kube-proxy-m6fmh" Jan 17 12:18:56.269919 kubelet[2563]: I0117 12:18:56.269780 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4nz\" (UniqueName: \"kubernetes.io/projected/e939a2c4-47a0-4039-8db5-25e0ea79b80e-kube-api-access-zn4nz\") pod \"kube-proxy-m6fmh\" (UID: \"e939a2c4-47a0-4039-8db5-25e0ea79b80e\") " pod="kube-system/kube-proxy-m6fmh" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371029 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-kernel\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371116 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cni-path\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371147 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-etc-cni-netd\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371212 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-run\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371235 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-bpf-maps\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.371908 kubelet[2563]: I0117 12:18:56.371259 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-lib-modules\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.372321 kubelet[2563]: I0117 12:18:56.371282 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-xtables-lock\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.372321 kubelet[2563]: I0117 12:18:56.371310 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjr7\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.372321 kubelet[2563]: I0117 12:18:56.371513 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-cgroup\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.372321 kubelet[2563]: I0117 12:18:56.371543 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-config-path\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.375659 kubelet[2563]: I0117 12:18:56.374659 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-net\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.375659 kubelet[2563]: I0117 12:18:56.374769 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2982d53b-908f-43a9-a46b-8b9e9f1749f8-clustermesh-secrets\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.375659 kubelet[2563]: I0117 12:18:56.374809 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hubble-tls\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.375659 kubelet[2563]: I0117 12:18:56.374947 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hostproc\") pod \"cilium-p7vx4\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " pod="kube-system/cilium-p7vx4" Jan 17 12:18:56.396940 kubelet[2563]: E0117 12:18:56.396810 2563 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:18:56.398082 kubelet[2563]: E0117 12:18:56.396957 2563 projected.go:200] Error preparing data for projected volume kube-api-access-zn4nz for pod 
kube-system/kube-proxy-m6fmh: configmap "kube-root-ca.crt" not found Jan 17 12:18:56.398082 kubelet[2563]: E0117 12:18:56.397125 2563 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e939a2c4-47a0-4039-8db5-25e0ea79b80e-kube-api-access-zn4nz podName:e939a2c4-47a0-4039-8db5-25e0ea79b80e nodeName:}" failed. No retries permitted until 2025-01-17 12:18:56.897055435 +0000 UTC m=+14.212239555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zn4nz" (UniqueName: "kubernetes.io/projected/e939a2c4-47a0-4039-8db5-25e0ea79b80e-kube-api-access-zn4nz") pod "kube-proxy-m6fmh" (UID: "e939a2c4-47a0-4039-8db5-25e0ea79b80e") : configmap "kube-root-ca.crt" not found Jan 17 12:18:56.507941 kubelet[2563]: E0117 12:18:56.507826 2563 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:18:56.507941 kubelet[2563]: E0117 12:18:56.507876 2563 projected.go:200] Error preparing data for projected volume kube-api-access-vxjr7 for pod kube-system/cilium-p7vx4: configmap "kube-root-ca.crt" not found Jan 17 12:18:56.511528 kubelet[2563]: E0117 12:18:56.507956 2563 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7 podName:2982d53b-908f-43a9-a46b-8b9e9f1749f8 nodeName:}" failed. No retries permitted until 2025-01-17 12:18:57.007927972 +0000 UTC m=+14.323112090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vxjr7" (UniqueName: "kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7") pod "cilium-p7vx4" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8") : configmap "kube-root-ca.crt" not found Jan 17 12:18:56.693598 kubelet[2563]: I0117 12:18:56.693397 2563 topology_manager.go:215] "Topology Admit Handler" podUID="92b9811b-054a-48ee-8dfa-a704ac286526" podNamespace="kube-system" podName="cilium-operator-599987898-vrrtr" Jan 17 12:18:56.724200 systemd[1]: Created slice kubepods-besteffort-pod92b9811b_054a_48ee_8dfa_a704ac286526.slice - libcontainer container kubepods-besteffort-pod92b9811b_054a_48ee_8dfa_a704ac286526.slice. 
Jan 17 12:18:56.879845 kubelet[2563]: I0117 12:18:56.879602 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkxg\" (UniqueName: \"kubernetes.io/projected/92b9811b-054a-48ee-8dfa-a704ac286526-kube-api-access-rqkxg\") pod \"cilium-operator-599987898-vrrtr\" (UID: \"92b9811b-054a-48ee-8dfa-a704ac286526\") " pod="kube-system/cilium-operator-599987898-vrrtr" Jan 17 12:18:56.879845 kubelet[2563]: I0117 12:18:56.879689 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b9811b-054a-48ee-8dfa-a704ac286526-cilium-config-path\") pod \"cilium-operator-599987898-vrrtr\" (UID: \"92b9811b-054a-48ee-8dfa-a704ac286526\") " pod="kube-system/cilium-operator-599987898-vrrtr" Jan 17 12:18:57.031305 kubelet[2563]: E0117 12:18:57.031238 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.033570 containerd[1469]: time="2025-01-17T12:18:57.033503669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vrrtr,Uid:92b9811b-054a-48ee-8dfa-a704ac286526,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:57.081214 containerd[1469]: time="2025-01-17T12:18:57.080926807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:57.081214 containerd[1469]: time="2025-01-17T12:18:57.081049584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:57.081214 containerd[1469]: time="2025-01-17T12:18:57.081078756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.081636 containerd[1469]: time="2025-01-17T12:18:57.081318726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.084724 kubelet[2563]: E0117 12:18:57.084486 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.089692 containerd[1469]: time="2025-01-17T12:18:57.087661665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6fmh,Uid:e939a2c4-47a0-4039-8db5-25e0ea79b80e,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:57.111944 kubelet[2563]: E0117 12:18:57.111887 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.117906 containerd[1469]: time="2025-01-17T12:18:57.117370450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7vx4,Uid:2982d53b-908f-43a9-a46b-8b9e9f1749f8,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:57.130098 systemd[1]: Started cri-containerd-9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550.scope - libcontainer container 9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550. Jan 17 12:18:57.167221 containerd[1469]: time="2025-01-17T12:18:57.166465256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:57.167221 containerd[1469]: time="2025-01-17T12:18:57.166569207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:57.167221 containerd[1469]: time="2025-01-17T12:18:57.166613690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.167221 containerd[1469]: time="2025-01-17T12:18:57.166771043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.206530 systemd[1]: Started cri-containerd-f7e3c7a6f09d9c1ac6a35478be46e701231c34e9e8df8cee953fa57faf4375fc.scope - libcontainer container f7e3c7a6f09d9c1ac6a35478be46e701231c34e9e8df8cee953fa57faf4375fc. Jan 17 12:18:57.210356 containerd[1469]: time="2025-01-17T12:18:57.209992632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:57.210356 containerd[1469]: time="2025-01-17T12:18:57.210092206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:57.210356 containerd[1469]: time="2025-01-17T12:18:57.210110194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.210596 containerd[1469]: time="2025-01-17T12:18:57.210280562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:57.257129 systemd[1]: Started cri-containerd-590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2.scope - libcontainer container 590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2. 
Jan 17 12:18:57.290815 containerd[1469]: time="2025-01-17T12:18:57.290489040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vrrtr,Uid:92b9811b-054a-48ee-8dfa-a704ac286526,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\"" Jan 17 12:18:57.293484 kubelet[2563]: E0117 12:18:57.293439 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.311804 containerd[1469]: time="2025-01-17T12:18:57.310465036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:18:57.322270 containerd[1469]: time="2025-01-17T12:18:57.322190383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6fmh,Uid:e939a2c4-47a0-4039-8db5-25e0ea79b80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7e3c7a6f09d9c1ac6a35478be46e701231c34e9e8df8cee953fa57faf4375fc\"" Jan 17 12:18:57.323462 kubelet[2563]: E0117 12:18:57.323073 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.336890 containerd[1469]: time="2025-01-17T12:18:57.336072616Z" level=info msg="CreateContainer within sandbox \"f7e3c7a6f09d9c1ac6a35478be46e701231c34e9e8df8cee953fa57faf4375fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:18:57.358502 containerd[1469]: time="2025-01-17T12:18:57.358419841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7vx4,Uid:2982d53b-908f-43a9-a46b-8b9e9f1749f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\"" Jan 17 12:18:57.359939 kubelet[2563]: E0117 12:18:57.359910 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:57.369015 containerd[1469]: time="2025-01-17T12:18:57.368935773Z" level=info msg="CreateContainer within sandbox \"f7e3c7a6f09d9c1ac6a35478be46e701231c34e9e8df8cee953fa57faf4375fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9523859ba04db6520b2f7fc853b0595a564a78613c38897112eeee9f9d1a66a2\"" Jan 17 12:18:57.370333 containerd[1469]: time="2025-01-17T12:18:57.369730971Z" level=info msg="StartContainer for \"9523859ba04db6520b2f7fc853b0595a564a78613c38897112eeee9f9d1a66a2\"" Jan 17 12:18:57.412490 systemd[1]: Started cri-containerd-9523859ba04db6520b2f7fc853b0595a564a78613c38897112eeee9f9d1a66a2.scope - libcontainer container 9523859ba04db6520b2f7fc853b0595a564a78613c38897112eeee9f9d1a66a2. 
Jan 17 12:18:57.464317 containerd[1469]: time="2025-01-17T12:18:57.464243709Z" level=info msg="StartContainer for \"9523859ba04db6520b2f7fc853b0595a564a78613c38897112eeee9f9d1a66a2\" returns successfully" Jan 17 12:18:58.142950 kubelet[2563]: E0117 12:18:58.142523 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:58.163139 kubelet[2563]: I0117 12:18:58.163058 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m6fmh" podStartSLOduration=2.163018705 podStartE2EDuration="2.163018705s" podCreationTimestamp="2025-01-17 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:58.162847332 +0000 UTC m=+15.478031450" watchObservedRunningTime="2025-01-17 12:18:58.163018705 +0000 UTC m=+15.478202840" Jan 17 12:19:00.485782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414702228.mount: Deactivated successfully. Jan 17 12:19:01.427986 containerd[1469]: time="2025-01-17T12:19:01.427911335Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:01.433695 containerd[1469]: time="2025-01-17T12:19:01.433562689Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Jan 17 12:19:01.437654 containerd[1469]: time="2025-01-17T12:19:01.435445347Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:01.439903 containerd[1469]: time="2025-01-17T12:19:01.439487274Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.12894858s" Jan 17 12:19:01.439903 containerd[1469]: time="2025-01-17T12:19:01.439565078Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:19:01.443899 containerd[1469]: time="2025-01-17T12:19:01.443424767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:19:01.451500 containerd[1469]: time="2025-01-17T12:19:01.450295900Z" level=info msg="CreateContainer within sandbox \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:19:01.493707 containerd[1469]: time="2025-01-17T12:19:01.493645086Z" level=info msg="CreateContainer within sandbox \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\"" 
Jan 17 12:19:01.495149 containerd[1469]: time="2025-01-17T12:19:01.495090739Z" level=info msg="StartContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\"" Jan 17 12:19:01.598506 systemd[1]: Started cri-containerd-4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546.scope - libcontainer container 4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546. Jan 17 12:19:01.723925 containerd[1469]: time="2025-01-17T12:19:01.723369174Z" level=info msg="StartContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" returns successfully" Jan 17 12:19:02.169541 kubelet[2563]: E0117 12:19:02.169278 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:02.279576 kubelet[2563]: I0117 12:19:02.278114 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-vrrtr" podStartSLOduration=2.140268391 podStartE2EDuration="6.278080276s" podCreationTimestamp="2025-01-17 12:18:56 +0000 UTC" firstStartedPulling="2025-01-17 12:18:57.303497476 +0000 UTC m=+14.618681575" lastFinishedPulling="2025-01-17 12:19:01.441309342 +0000 UTC m=+18.756493460" observedRunningTime="2025-01-17 12:19:02.255878865 +0000 UTC m=+19.571062993" watchObservedRunningTime="2025-01-17 12:19:02.278080276 +0000 UTC m=+19.593264412" Jan 17 12:19:03.191786 kubelet[2563]: E0117 12:19:03.191710 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:11.278876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422096083.mount: Deactivated successfully. 
Jan 17 12:19:15.176942 containerd[1469]: time="2025-01-17T12:19:15.176821154Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.180734 containerd[1469]: time="2025-01-17T12:19:15.180563457Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735367" Jan 17 12:19:15.182200 containerd[1469]: time="2025-01-17T12:19:15.181912263Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.185006 containerd[1469]: time="2025-01-17T12:19:15.184951093Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.741450988s" Jan 17 12:19:15.185313 containerd[1469]: time="2025-01-17T12:19:15.185209286Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:19:15.200569 containerd[1469]: time="2025-01-17T12:19:15.200059703Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:19:15.341041 containerd[1469]: time="2025-01-17T12:19:15.339946404Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\"" Jan 17 12:19:15.345521 containerd[1469]: time="2025-01-17T12:19:15.341794323Z" level=info msg="StartContainer for \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\"" Jan 17 12:19:15.592504 systemd[1]: Started cri-containerd-0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae.scope - libcontainer container 0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae. Jan 17 12:19:15.641336 containerd[1469]: time="2025-01-17T12:19:15.641099086Z" level=info msg="StartContainer for \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\" returns successfully" Jan 17 12:19:15.664852 systemd[1]: cri-containerd-0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae.scope: Deactivated successfully. 
Jan 17 12:19:15.838729 containerd[1469]: time="2025-01-17T12:19:15.807116554Z" level=info msg="shim disconnected" id=0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae namespace=k8s.io Jan 17 12:19:15.838729 containerd[1469]: time="2025-01-17T12:19:15.838722737Z" level=warning msg="cleaning up after shim disconnected" id=0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae namespace=k8s.io Jan 17 12:19:15.839374 containerd[1469]: time="2025-01-17T12:19:15.838750933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:16.233309 kubelet[2563]: E0117 12:19:16.232711 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:16.241804 containerd[1469]: time="2025-01-17T12:19:16.241720326Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:19:16.288536 containerd[1469]: time="2025-01-17T12:19:16.288466532Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\"" Jan 17 12:19:16.294463 containerd[1469]: time="2025-01-17T12:19:16.293284530Z" level=info msg="StartContainer for \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\"" Jan 17 12:19:16.332779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae-rootfs.mount: Deactivated successfully. Jan 17 12:19:16.354570 systemd[1]: Started cri-containerd-07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1.scope - libcontainer container 07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1. Jan 17 12:19:16.404771 containerd[1469]: time="2025-01-17T12:19:16.404688161Z" level=info msg="StartContainer for \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\" returns successfully" Jan 17 12:19:16.424537 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:19:16.424919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:16.425034 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:19:16.432761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:19:16.433110 systemd[1]: cri-containerd-07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1.scope: Deactivated successfully. Jan 17 12:19:16.479975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1-rootfs.mount: Deactivated successfully. 
Jan 17 12:19:16.487515 containerd[1469]: time="2025-01-17T12:19:16.486862280Z" level=info msg="shim disconnected" id=07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1 namespace=k8s.io Jan 17 12:19:16.487515 containerd[1469]: time="2025-01-17T12:19:16.487093878Z" level=warning msg="cleaning up after shim disconnected" id=07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1 namespace=k8s.io Jan 17 12:19:16.487515 containerd[1469]: time="2025-01-17T12:19:16.487113955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:16.499862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:19:17.235300 kubelet[2563]: E0117 12:19:17.235251 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:17.241478 containerd[1469]: time="2025-01-17T12:19:17.240303604Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:19:17.276989 containerd[1469]: time="2025-01-17T12:19:17.276860736Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\"" Jan 17 12:19:17.280667 containerd[1469]: time="2025-01-17T12:19:17.278438011Z" level=info msg="StartContainer for \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\"" Jan 17 12:19:17.344545 systemd[1]: Started cri-containerd-f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f.scope - libcontainer container f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f. Jan 17 12:19:17.390274 containerd[1469]: time="2025-01-17T12:19:17.390193055Z" level=info msg="StartContainer for \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\" returns successfully" Jan 17 12:19:17.401522 systemd[1]: cri-containerd-f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f.scope: Deactivated successfully. Jan 17 12:19:17.443089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f-rootfs.mount: Deactivated successfully. 
Jan 17 12:19:17.451009 containerd[1469]: time="2025-01-17T12:19:17.450668904Z" level=info msg="shim disconnected" id=f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f namespace=k8s.io Jan 17 12:19:17.451009 containerd[1469]: time="2025-01-17T12:19:17.450752679Z" level=warning msg="cleaning up after shim disconnected" id=f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f namespace=k8s.io Jan 17 12:19:17.451009 containerd[1469]: time="2025-01-17T12:19:17.450765516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:18.240430 kubelet[2563]: E0117 12:19:18.240384 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:18.244334 containerd[1469]: time="2025-01-17T12:19:18.243408416Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:19:18.268549 containerd[1469]: time="2025-01-17T12:19:18.268477677Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\"" Jan 17 12:19:18.269466 containerd[1469]: time="2025-01-17T12:19:18.269405902Z" level=info msg="StartContainer for \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\"" Jan 17 12:19:18.315855 systemd[1]: Started cri-containerd-71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246.scope - libcontainer container 71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246. Jan 17 12:19:18.358920 systemd[1]: cri-containerd-71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246.scope: Deactivated successfully. Jan 17 12:19:18.366648 containerd[1469]: time="2025-01-17T12:19:18.365841645Z" level=info msg="StartContainer for \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\" returns successfully" Jan 17 12:19:18.417340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246-rootfs.mount: Deactivated successfully. 
Jan 17 12:19:18.422200 containerd[1469]: time="2025-01-17T12:19:18.421974965Z" level=info msg="shim disconnected" id=71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246 namespace=k8s.io Jan 17 12:19:18.422200 containerd[1469]: time="2025-01-17T12:19:18.422127363Z" level=warning msg="cleaning up after shim disconnected" id=71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246 namespace=k8s.io Jan 17 12:19:18.422200 containerd[1469]: time="2025-01-17T12:19:18.422139660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:19.249514 kubelet[2563]: E0117 12:19:19.246737 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:19.255403 containerd[1469]: time="2025-01-17T12:19:19.255331525Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:19:19.286764 containerd[1469]: time="2025-01-17T12:19:19.286553444Z" level=info msg="CreateContainer within sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\"" Jan 17 12:19:19.288771 containerd[1469]: time="2025-01-17T12:19:19.287932613Z" level=info msg="StartContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\"" Jan 17 12:19:19.371536 systemd[1]: Started cri-containerd-9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c.scope - libcontainer container 9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c. Jan 17 12:19:19.418092 systemd[1]: run-containerd-runc-k8s.io-9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c-runc.pDTDdN.mount: Deactivated successfully. Jan 17 12:19:19.427493 containerd[1469]: time="2025-01-17T12:19:19.427252438Z" level=info msg="StartContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" returns successfully" Jan 17 12:19:19.618779 kubelet[2563]: I0117 12:19:19.617108 2563 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:19:19.661876 kubelet[2563]: I0117 12:19:19.661121 2563 topology_manager.go:215] "Topology Admit Handler" podUID="5269cdeb-36e5-41d0-83cf-b247edad2306" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8wj85" Jan 17 12:19:19.671044 kubelet[2563]: I0117 12:19:19.669562 2563 topology_manager.go:215] "Topology Admit Handler" podUID="7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wv7fz" Jan 17 12:19:19.675376 systemd[1]: Created slice kubepods-burstable-pod5269cdeb_36e5_41d0_83cf_b247edad2306.slice - libcontainer container kubepods-burstable-pod5269cdeb_36e5_41d0_83cf_b247edad2306.slice. 
Jan 17 12:19:19.682112 kubelet[2563]: W0117 12:19:19.682053 2563 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.0-8-018bcc3779" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-8-018bcc3779' and this object Jan 17 12:19:19.682112 kubelet[2563]: E0117 12:19:19.682111 2563 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.0-8-018bcc3779" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-8-018bcc3779' and this object Jan 17 12:19:19.692061 systemd[1]: Created slice kubepods-burstable-pod7bf8b93a_54d9_47f3_9a02_aa7e3c8ddcb2.slice - libcontainer container kubepods-burstable-pod7bf8b93a_54d9_47f3_9a02_aa7e3c8ddcb2.slice. Jan 17 12:19:19.741828 kubelet[2563]: I0117 12:19:19.741345 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2-config-volume\") pod \"coredns-7db6d8ff4d-wv7fz\" (UID: \"7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2\") " pod="kube-system/coredns-7db6d8ff4d-wv7fz" Jan 17 12:19:19.741828 kubelet[2563]: I0117 12:19:19.741405 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmx9s\" (UniqueName: \"kubernetes.io/projected/7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2-kube-api-access-mmx9s\") pod \"coredns-7db6d8ff4d-wv7fz\" (UID: \"7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2\") " pod="kube-system/coredns-7db6d8ff4d-wv7fz" Jan 17 12:19:19.741828 kubelet[2563]: I0117 12:19:19.741434 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5269cdeb-36e5-41d0-83cf-b247edad2306-config-volume\") pod \"coredns-7db6d8ff4d-8wj85\" (UID: \"5269cdeb-36e5-41d0-83cf-b247edad2306\") " pod="kube-system/coredns-7db6d8ff4d-8wj85" Jan 17 12:19:19.741828 kubelet[2563]: I0117 12:19:19.741462 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2kl\" (UniqueName: \"kubernetes.io/projected/5269cdeb-36e5-41d0-83cf-b247edad2306-kube-api-access-ps2kl\") pod \"coredns-7db6d8ff4d-8wj85\" (UID: \"5269cdeb-36e5-41d0-83cf-b247edad2306\") " pod="kube-system/coredns-7db6d8ff4d-8wj85" Jan 17 12:19:20.266941 kubelet[2563]: E0117 12:19:20.266401 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:20.886103 kubelet[2563]: E0117 12:19:20.885752 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:20.887098 containerd[1469]: time="2025-01-17T12:19:20.886622512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wj85,Uid:5269cdeb-36e5-41d0-83cf-b247edad2306,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:20.898808 kubelet[2563]: E0117 12:19:20.897698 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:20.899081 containerd[1469]: time="2025-01-17T12:19:20.898450352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv7fz,Uid:7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:21.261681 kubelet[2563]: E0117 12:19:21.261635 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:21.861735 systemd-networkd[1371]: cilium_host: Link UP Jan 17 12:19:21.861998 systemd-networkd[1371]: cilium_net: Link UP Jan 17 12:19:21.862004 systemd-networkd[1371]: cilium_net: Gained carrier Jan 17 12:19:21.865452 systemd-networkd[1371]: cilium_host: Gained carrier Jan 17 12:19:21.865743 systemd-networkd[1371]: cilium_host: Gained IPv6LL Jan 17 12:19:21.865913 systemd-networkd[1371]: cilium_net: Gained IPv6LL Jan 17 12:19:22.052477 systemd-networkd[1371]: cilium_vxlan: Link UP Jan 17 12:19:22.052489 systemd-networkd[1371]: cilium_vxlan: Gained carrier Jan 17 12:19:22.265079 kubelet[2563]: E0117 12:19:22.263903 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:22.305666 systemd[1]: Started sshd@7-209.38.133.237:22-139.178.68.195:32820.service - OpenSSH per-connection server daemon (139.178.68.195:32820). Jan 17 12:19:22.406288 sshd[3475]: Accepted publickey for core from 139.178.68.195 port 32820 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:22.411549 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:22.443016 systemd-logind[1446]: New session 8 of user core. Jan 17 12:19:22.449222 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:19:22.701498 kernel: NET: Registered PF_ALG protocol family Jan 17 12:19:23.099009 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL Jan 17 12:19:23.217619 sshd[3475]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:23.222981 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:19:23.223432 systemd[1]: sshd@7-209.38.133.237:22-139.178.68.195:32820.service: Deactivated successfully. Jan 17 12:19:23.229511 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:19:23.238550 systemd-logind[1446]: Removed session 8. 
Jan 17 12:19:23.976921 systemd-networkd[1371]: lxc_health: Link UP Jan 17 12:19:23.986292 systemd-networkd[1371]: lxc_health: Gained carrier Jan 17 12:19:24.526508 systemd-networkd[1371]: lxcf1996b6141fe: Link UP Jan 17 12:19:24.543289 kernel: eth0: renamed from tmpd1c72 Jan 17 12:19:24.549478 systemd-networkd[1371]: lxcee9a3c19e229: Link UP Jan 17 12:19:24.561202 kernel: eth0: renamed from tmpc90da Jan 17 12:19:24.570774 systemd-networkd[1371]: lxcf1996b6141fe: Gained carrier Jan 17 12:19:24.576793 systemd-networkd[1371]: lxcee9a3c19e229: Gained carrier Jan 17 12:19:25.118765 kubelet[2563]: E0117 12:19:25.118708 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:25.146083 kubelet[2563]: I0117 12:19:25.145643 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p7vx4" podStartSLOduration=11.320574171 podStartE2EDuration="29.145618797s" podCreationTimestamp="2025-01-17 12:18:56 +0000 UTC" firstStartedPulling="2025-01-17 12:18:57.361264513 +0000 UTC m=+14.676448618" lastFinishedPulling="2025-01-17 12:19:15.186309118 +0000 UTC m=+32.501493244" observedRunningTime="2025-01-17 12:19:20.306791465 +0000 UTC m=+37.621975593" watchObservedRunningTime="2025-01-17 12:19:25.145618797 +0000 UTC m=+42.460802939" Jan 17 12:19:25.274388 kubelet[2563]: E0117 12:19:25.273038 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:25.530605 systemd-networkd[1371]: lxc_health: Gained IPv6LL Jan 17 12:19:26.106768 systemd-networkd[1371]: lxcee9a3c19e229: Gained IPv6LL Jan 17 12:19:26.298480 systemd-networkd[1371]: lxcf1996b6141fe: Gained IPv6LL Jan 17 12:19:28.234446 systemd[1]: Started sshd@8-209.38.133.237:22-139.178.68.195:56976.service - OpenSSH per-connection server daemon (139.178.68.195:56976). Jan 17 12:19:28.338425 sshd[3782]: Accepted publickey for core from 139.178.68.195 port 56976 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:28.337117 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:28.347262 systemd-logind[1446]: New session 9 of user core. Jan 17 12:19:28.354458 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:19:28.582254 sshd[3782]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:28.588247 systemd[1]: sshd@8-209.38.133.237:22-139.178.68.195:56976.service: Deactivated successfully. Jan 17 12:19:28.592791 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:19:28.595234 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:19:28.599219 systemd-logind[1446]: Removed session 9. Jan 17 12:19:30.645712 containerd[1469]: time="2025-01-17T12:19:30.645529563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:30.646979 containerd[1469]: time="2025-01-17T12:19:30.646228344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:30.646979 containerd[1469]: time="2025-01-17T12:19:30.646317327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:30.646979 containerd[1469]: time="2025-01-17T12:19:30.646441157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:30.698502 systemd[1]: Started cri-containerd-d1c72f375f526613da0309745524f28d276c7405c7b250ba5b03a40683707199.scope - libcontainer container d1c72f375f526613da0309745524f28d276c7405c7b250ba5b03a40683707199. Jan 17 12:19:30.720418 containerd[1469]: time="2025-01-17T12:19:30.719717616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:30.723979 containerd[1469]: time="2025-01-17T12:19:30.722048986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:30.723979 containerd[1469]: time="2025-01-17T12:19:30.722097017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:30.723979 containerd[1469]: time="2025-01-17T12:19:30.723556605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:30.773474 systemd[1]: Started cri-containerd-c90da23fb63c00e72ba9367c5555961cbd6d5204fdfa2eb96f65ed3ecde400b6.scope - libcontainer container c90da23fb63c00e72ba9367c5555961cbd6d5204fdfa2eb96f65ed3ecde400b6. Jan 17 12:19:30.859923 containerd[1469]: time="2025-01-17T12:19:30.859842572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8wj85,Uid:5269cdeb-36e5-41d0-83cf-b247edad2306,Namespace:kube-system,Attempt:0,} returns sandbox id \"c90da23fb63c00e72ba9367c5555961cbd6d5204fdfa2eb96f65ed3ecde400b6\"" Jan 17 12:19:30.862480 kubelet[2563]: E0117 12:19:30.862437 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:30.868725 containerd[1469]: time="2025-01-17T12:19:30.868301352Z" level=info msg="CreateContainer within sandbox \"c90da23fb63c00e72ba9367c5555961cbd6d5204fdfa2eb96f65ed3ecde400b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:19:30.913297 containerd[1469]: time="2025-01-17T12:19:30.912118373Z" level=info msg="CreateContainer within sandbox \"c90da23fb63c00e72ba9367c5555961cbd6d5204fdfa2eb96f65ed3ecde400b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a763fd1f2432306529753fa81ddac8c548d85d154111b63e6a8eb83a2e2b700\"" Jan 17 12:19:30.917366 containerd[1469]: time="2025-01-17T12:19:30.916519219Z" level=info msg="StartContainer for \"9a763fd1f2432306529753fa81ddac8c548d85d154111b63e6a8eb83a2e2b700\"" Jan 17 12:19:30.920293 containerd[1469]: time="2025-01-17T12:19:30.919900265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wv7fz,Uid:7bf8b93a-54d9-47f3-9a02-aa7e3c8ddcb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1c72f375f526613da0309745524f28d276c7405c7b250ba5b03a40683707199\"" Jan 17 12:19:30.923382 kubelet[2563]: E0117 12:19:30.922855 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:30.931860 containerd[1469]: time="2025-01-17T12:19:30.931789721Z" level=info 
msg="CreateContainer within sandbox \"d1c72f375f526613da0309745524f28d276c7405c7b250ba5b03a40683707199\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:19:30.959372 containerd[1469]: time="2025-01-17T12:19:30.958979492Z" level=info msg="CreateContainer within sandbox \"d1c72f375f526613da0309745524f28d276c7405c7b250ba5b03a40683707199\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6788e8c060cf8972ee0c77bdf7fd9c1f59ab3905b04350ef3329025584f6ddfa\"" Jan 17 12:19:30.966338 containerd[1469]: time="2025-01-17T12:19:30.965658643Z" level=info msg="StartContainer for \"6788e8c060cf8972ee0c77bdf7fd9c1f59ab3905b04350ef3329025584f6ddfa\"" Jan 17 12:19:31.030534 systemd[1]: Started cri-containerd-9a763fd1f2432306529753fa81ddac8c548d85d154111b63e6a8eb83a2e2b700.scope - libcontainer container 9a763fd1f2432306529753fa81ddac8c548d85d154111b63e6a8eb83a2e2b700. Jan 17 12:19:31.066679 systemd[1]: Started cri-containerd-6788e8c060cf8972ee0c77bdf7fd9c1f59ab3905b04350ef3329025584f6ddfa.scope - libcontainer container 6788e8c060cf8972ee0c77bdf7fd9c1f59ab3905b04350ef3329025584f6ddfa. Jan 17 12:19:31.121997 containerd[1469]: time="2025-01-17T12:19:31.121916800Z" level=info msg="StartContainer for \"9a763fd1f2432306529753fa81ddac8c548d85d154111b63e6a8eb83a2e2b700\" returns successfully" Jan 17 12:19:31.149181 containerd[1469]: time="2025-01-17T12:19:31.149094138Z" level=info msg="StartContainer for \"6788e8c060cf8972ee0c77bdf7fd9c1f59ab3905b04350ef3329025584f6ddfa\" returns successfully" Jan 17 12:19:31.300190 kubelet[2563]: E0117 12:19:31.299857 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.325110 kubelet[2563]: E0117 12:19:31.323568 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.369878 kubelet[2563]: I0117 12:19:31.369345 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wv7fz" podStartSLOduration=35.369320171 podStartE2EDuration="35.369320171s" podCreationTimestamp="2025-01-17 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:31.367326413 +0000 UTC m=+48.682510555" watchObservedRunningTime="2025-01-17 12:19:31.369320171 +0000 UTC m=+48.684504313" Jan 17 12:19:31.658198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657781181.mount: Deactivated successfully. 
Jan 17 12:19:32.316338 kubelet[2563]: E0117 12:19:32.315406 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:32.319208 kubelet[2563]: E0117 12:19:32.318039 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:32.340539 kubelet[2563]: I0117 12:19:32.340422 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8wj85" podStartSLOduration=36.340394415 podStartE2EDuration="36.340394415s" podCreationTimestamp="2025-01-17 12:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:31.414559226 +0000 UTC m=+48.729743368" watchObservedRunningTime="2025-01-17 12:19:32.340394415 +0000 UTC m=+49.655578551" Jan 17 12:19:33.318199 kubelet[2563]: E0117 12:19:33.317677 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:33.318199 kubelet[2563]: E0117 12:19:33.317701 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:33.612483 systemd[1]: Started sshd@9-209.38.133.237:22-139.178.68.195:56982.service - OpenSSH per-connection server daemon (139.178.68.195:56982). Jan 17 12:19:33.714367 sshd[3971]: Accepted publickey for core from 139.178.68.195 port 56982 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:33.717308 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:33.724318 systemd-logind[1446]: New session 10 of user core. Jan 17 12:19:33.729462 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:19:33.925001 sshd[3971]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:33.932060 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:19:33.932497 systemd[1]: sshd@9-209.38.133.237:22-139.178.68.195:56982.service: Deactivated successfully. Jan 17 12:19:33.936107 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:19:33.937852 systemd-logind[1446]: Removed session 10. Jan 17 12:19:38.942526 systemd[1]: Started sshd@10-209.38.133.237:22-139.178.68.195:36142.service - OpenSSH per-connection server daemon (139.178.68.195:36142). Jan 17 12:19:39.009203 sshd[3985]: Accepted publickey for core from 139.178.68.195 port 36142 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:39.010592 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:39.018020 systemd-logind[1446]: New session 11 of user core. Jan 17 12:19:39.029490 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:19:39.169931 sshd[3985]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:39.181590 systemd[1]: sshd@10-209.38.133.237:22-139.178.68.195:36142.service: Deactivated successfully. Jan 17 12:19:39.185454 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:19:39.189075 systemd-logind[1446]: Session 11 logged out. 
Waiting for processes to exit. Jan 17 12:19:39.196621 systemd[1]: Started sshd@11-209.38.133.237:22-139.178.68.195:36154.service - OpenSSH per-connection server daemon (139.178.68.195:36154). Jan 17 12:19:39.200861 systemd-logind[1446]: Removed session 11. Jan 17 12:19:39.254094 sshd[3998]: Accepted publickey for core from 139.178.68.195 port 36154 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:39.256549 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:39.262870 systemd-logind[1446]: New session 12 of user core. Jan 17 12:19:39.270524 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:19:39.541037 sshd[3998]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:39.555689 systemd[1]: sshd@11-209.38.133.237:22-139.178.68.195:36154.service: Deactivated successfully. Jan 17 12:19:39.563357 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:19:39.571348 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:19:39.579727 systemd[1]: Started sshd@12-209.38.133.237:22-139.178.68.195:36168.service - OpenSSH per-connection server daemon (139.178.68.195:36168). Jan 17 12:19:39.588524 systemd-logind[1446]: Removed session 12. Jan 17 12:19:39.659931 sshd[4009]: Accepted publickey for core from 139.178.68.195 port 36168 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:39.661558 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:39.671335 systemd-logind[1446]: New session 13 of user core. Jan 17 12:19:39.677553 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:19:39.833216 sshd[4009]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:39.838529 systemd[1]: sshd@12-209.38.133.237:22-139.178.68.195:36168.service: Deactivated successfully. Jan 17 12:19:39.841546 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:19:39.842719 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:19:39.844342 systemd-logind[1446]: Removed session 13. Jan 17 12:19:44.856647 systemd[1]: Started sshd@13-209.38.133.237:22-139.178.68.195:41676.service - OpenSSH per-connection server daemon (139.178.68.195:41676). Jan 17 12:19:44.922248 sshd[4025]: Accepted publickey for core from 139.178.68.195 port 41676 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:44.923309 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:44.933364 systemd-logind[1446]: New session 14 of user core. Jan 17 12:19:44.935502 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:19:45.088555 sshd[4025]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:45.095231 systemd[1]: sshd@13-209.38.133.237:22-139.178.68.195:41676.service: Deactivated successfully. Jan 17 12:19:45.099065 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:19:45.100512 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:19:45.101748 systemd-logind[1446]: Removed session 14. Jan 17 12:19:50.110630 systemd[1]: Started sshd@14-209.38.133.237:22-139.178.68.195:41690.service - OpenSSH per-connection server daemon (139.178.68.195:41690). 
Jan 17 12:19:50.174345 sshd[4037]: Accepted publickey for core from 139.178.68.195 port 41690 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:50.176856 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:50.187358 systemd-logind[1446]: New session 15 of user core. Jan 17 12:19:50.193581 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:19:50.430914 sshd[4037]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:50.444005 systemd[1]: sshd@14-209.38.133.237:22-139.178.68.195:41690.service: Deactivated successfully. Jan 17 12:19:50.446788 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:19:50.450043 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:19:50.460000 systemd[1]: Started sshd@15-209.38.133.237:22-139.178.68.195:41698.service - OpenSSH per-connection server daemon (139.178.68.195:41698). Jan 17 12:19:50.462539 systemd-logind[1446]: Removed session 15. Jan 17 12:19:50.520744 sshd[4050]: Accepted publickey for core from 139.178.68.195 port 41698 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:50.522949 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:50.532913 systemd-logind[1446]: New session 16 of user core. Jan 17 12:19:50.537618 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:19:50.886236 sshd[4050]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:50.903812 systemd[1]: sshd@15-209.38.133.237:22-139.178.68.195:41698.service: Deactivated successfully. Jan 17 12:19:50.906789 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:19:50.910002 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:19:50.917745 systemd[1]: Started sshd@16-209.38.133.237:22-139.178.68.195:41714.service - OpenSSH per-connection server daemon (139.178.68.195:41714). Jan 17 12:19:50.921304 systemd-logind[1446]: Removed session 16. Jan 17 12:19:50.989791 sshd[4061]: Accepted publickey for core from 139.178.68.195 port 41714 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:50.992698 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:51.002528 systemd-logind[1446]: New session 17 of user core. Jan 17 12:19:51.020550 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:19:53.621151 sshd[4061]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:53.648311 systemd[1]: Started sshd@17-209.38.133.237:22-139.178.68.195:41716.service - OpenSSH per-connection server daemon (139.178.68.195:41716). Jan 17 12:19:53.650742 systemd[1]: sshd@16-209.38.133.237:22-139.178.68.195:41714.service: Deactivated successfully. Jan 17 12:19:53.661288 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:19:53.671275 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:19:53.678290 systemd-logind[1446]: Removed session 17. Jan 17 12:19:53.747308 sshd[4074]: Accepted publickey for core from 139.178.68.195 port 41716 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:53.749610 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:53.765259 systemd-logind[1446]: New session 18 of user core. Jan 17 12:19:53.776193 systemd[1]: Started session-18.scope - Session 18 of User core. 
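The "SHA256:r8mW/..." string repeated in the Accepted-publickey lines is an OpenSSH key fingerprint: the unpadded base64 encoding of the SHA-256 digest of the wire-format public-key blob. A minimal sketch that computes one from the base64 blob of an authorized_keys entry (the key material below is made up purely to exercise the code):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint computes an OpenSSH-style SHA256 fingerprint: unpadded
// base64 of the SHA-256 digest of the decoded public-key blob.
func fingerprint(b64Key string) (string, error) {
	blob, err := base64.StdEncoding.DecodeString(b64Key)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// A made-up (and far too short) key blob, not a real RSA key.
	fp, err := fingerprint("AAAAB3NzaC1yc2EAAAADAQABAAAAgQC0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(fp)
}
```

From the shell, `ssh-keygen -lf <keyfile>` prints the same value for a given key.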
Jan 17 12:19:54.211830 sshd[4074]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:54.228808 systemd[1]: sshd@17-209.38.133.237:22-139.178.68.195:41716.service: Deactivated successfully. Jan 17 12:19:54.233578 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:19:54.238080 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:19:54.248260 systemd[1]: Started sshd@18-209.38.133.237:22-139.178.68.195:41718.service - OpenSSH per-connection server daemon (139.178.68.195:41718). Jan 17 12:19:54.250463 systemd-logind[1446]: Removed session 18. Jan 17 12:19:54.317710 sshd[4089]: Accepted publickey for core from 139.178.68.195 port 41718 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:54.321607 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:54.332224 systemd-logind[1446]: New session 19 of user core. Jan 17 12:19:54.340599 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:19:54.522436 sshd[4089]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:54.531004 systemd[1]: sshd@18-209.38.133.237:22-139.178.68.195:41718.service: Deactivated successfully. Jan 17 12:19:54.534675 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:19:54.536670 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:19:54.538088 systemd-logind[1446]: Removed session 19. Jan 17 12:19:59.545430 systemd[1]: Started sshd@19-209.38.133.237:22-139.178.68.195:33498.service - OpenSSH per-connection server daemon (139.178.68.195:33498). Jan 17 12:19:59.597093 sshd[4107]: Accepted publickey for core from 139.178.68.195 port 33498 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:59.599397 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:59.607383 systemd-logind[1446]: New session 20 of user core. Jan 17 12:19:59.619537 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:19:59.767893 sshd[4107]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:59.771606 systemd[1]: sshd@19-209.38.133.237:22-139.178.68.195:33498.service: Deactivated successfully. Jan 17 12:19:59.774337 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:19:59.777665 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:19:59.779055 systemd-logind[1446]: Removed session 20. Jan 17 12:20:04.805790 systemd[1]: Started sshd@20-209.38.133.237:22-139.178.68.195:53340.service - OpenSSH per-connection server daemon (139.178.68.195:53340). Jan 17 12:20:04.892769 sshd[4120]: Accepted publickey for core from 139.178.68.195 port 53340 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:04.896651 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:04.915318 systemd-logind[1446]: New session 21 of user core. Jan 17 12:20:04.920551 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:20:05.153053 sshd[4120]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:05.161841 systemd[1]: sshd@20-209.38.133.237:22-139.178.68.195:53340.service: Deactivated successfully. Jan 17 12:20:05.172524 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:20:05.174151 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. 
Jan 17 12:20:05.176889 systemd-logind[1446]: Removed session 21. Jan 17 12:20:08.886903 kubelet[2563]: E0117 12:20:08.884472 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:20:10.189812 systemd[1]: Started sshd@21-209.38.133.237:22-139.178.68.195:53352.service - OpenSSH per-connection server daemon (139.178.68.195:53352). Jan 17 12:20:10.251865 sshd[4133]: Accepted publickey for core from 139.178.68.195 port 53352 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:10.254514 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:10.264123 systemd-logind[1446]: New session 22 of user core. Jan 17 12:20:10.271988 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:20:10.477423 sshd[4133]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:10.481842 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:20:10.482275 systemd[1]: sshd@21-209.38.133.237:22-139.178.68.195:53352.service: Deactivated successfully. Jan 17 12:20:10.485345 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:20:10.489973 systemd-logind[1446]: Removed session 22. Jan 17 12:20:12.881893 kubelet[2563]: E0117 12:20:12.881762 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:20:14.882224 kubelet[2563]: E0117 12:20:14.881684 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:20:15.501802 systemd[1]: Started sshd@22-209.38.133.237:22-139.178.68.195:59612.service - OpenSSH per-connection server daemon (139.178.68.195:59612). Jan 17 12:20:15.585410 sshd[4146]: Accepted publickey for core from 139.178.68.195 port 59612 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:15.588248 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:15.597271 systemd-logind[1446]: New session 23 of user core. Jan 17 12:20:15.606519 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:20:15.792958 sshd[4146]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:15.808182 systemd[1]: sshd@22-209.38.133.237:22-139.178.68.195:59612.service: Deactivated successfully. Jan 17 12:20:15.811484 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:20:15.813045 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:20:15.828905 systemd[1]: Started sshd@23-209.38.133.237:22-139.178.68.195:59622.service - OpenSSH per-connection server daemon (139.178.68.195:59622). Jan 17 12:20:15.829927 systemd-logind[1446]: Removed session 23. Jan 17 12:20:15.898576 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 59622 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:15.901954 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:15.910362 systemd-logind[1446]: New session 24 of user core. Jan 17 12:20:15.916509 systemd[1]: Started session-24.scope - Session 24 of User core. 
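The "Nameserver limits exceeded" warnings that recur throughout this log come from kubelet's dns.go: when assembling a pod's resolv.conf it keeps at most three nameservers (mirroring glibc's MAXNS limit) and warns about anything dropped. Note that the applied line still lists 67.207.67.2 twice, so the duplicate originates in the node's own /etc/resolv.conf; deduplicating that file would likely silence the warning. A minimal sketch of the truncation, assuming a conventional resolv.conf layout (an illustration, not kubelet's implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet mirrors glibc's MAXNS here

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// kubelet warns and applies only the first three, duplicates included.
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
}
```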
Jan 17 12:20:17.768078 containerd[1469]: time="2025-01-17T12:20:17.767511982Z" level=info msg="StopContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" with timeout 30 (s)" Jan 17 12:20:17.773738 containerd[1469]: time="2025-01-17T12:20:17.773643957Z" level=info msg="Stop container \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" with signal terminated" Jan 17 12:20:17.781237 containerd[1469]: time="2025-01-17T12:20:17.780577479Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:20:17.804273 containerd[1469]: time="2025-01-17T12:20:17.801446836Z" level=info msg="StopContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" with timeout 2 (s)" Jan 17 12:20:17.804695 containerd[1469]: time="2025-01-17T12:20:17.804661476Z" level=info msg="Stop container \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" with signal terminated" Jan 17 12:20:17.809325 systemd[1]: cri-containerd-4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546.scope: Deactivated successfully. Jan 17 12:20:17.828696 systemd-networkd[1371]: lxc_health: Link DOWN Jan 17 12:20:17.828713 systemd-networkd[1371]: lxc_health: Lost carrier Jan 17 12:20:17.885518 kubelet[2563]: E0117 12:20:17.885471 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:20:17.892180 systemd[1]: cri-containerd-9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c.scope: Deactivated successfully. Jan 17 12:20:17.892522 systemd[1]: cri-containerd-9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c.scope: Consumed 10.136s CPU time. Jan 17 12:20:17.907012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546-rootfs.mount: Deactivated successfully. Jan 17 12:20:17.919138 containerd[1469]: time="2025-01-17T12:20:17.918937376Z" level=info msg="shim disconnected" id=4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546 namespace=k8s.io Jan 17 12:20:17.919138 containerd[1469]: time="2025-01-17T12:20:17.919209497Z" level=warning msg="cleaning up after shim disconnected" id=4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546 namespace=k8s.io Jan 17 12:20:17.919138 containerd[1469]: time="2025-01-17T12:20:17.919233072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:17.974391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c-rootfs.mount: Deactivated successfully. 
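The "failed to reload cni configuration after receiving fs change event(REMOVE ...)" line above shows the CRI plugin watching the CNI config directory: removing 05-cilium.conf during teardown leaves no network config at all, which is what later surfaces as kubelet's "Container runtime network not ready ... cni plugin not initialized" condition. A minimal directory watcher in the same spirit, using the fsnotify library (containerd's actual reload path differs):

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("fs change event(REMOVE %q): reloading cni configuration", ev.Name)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```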
Jan 17 12:20:17.986032 containerd[1469]: time="2025-01-17T12:20:17.985676528Z" level=info msg="shim disconnected" id=9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c namespace=k8s.io Jan 17 12:20:17.986032 containerd[1469]: time="2025-01-17T12:20:17.985785739Z" level=warning msg="cleaning up after shim disconnected" id=9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c namespace=k8s.io Jan 17 12:20:17.986032 containerd[1469]: time="2025-01-17T12:20:17.985799208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:17.990233 containerd[1469]: time="2025-01-17T12:20:17.989528702Z" level=info msg="StopContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" returns successfully" Jan 17 12:20:17.998862 containerd[1469]: time="2025-01-17T12:20:17.998553451Z" level=info msg="StopPodSandbox for \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\"" Jan 17 12:20:17.998862 containerd[1469]: time="2025-01-17T12:20:17.998689440Z" level=info msg="Container to stop \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.008793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550-shm.mount: Deactivated successfully. Jan 17 12:20:18.025554 systemd[1]: cri-containerd-9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550.scope: Deactivated successfully. Jan 17 12:20:18.037111 containerd[1469]: time="2025-01-17T12:20:18.036622403Z" level=info msg="StopContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" returns successfully" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.037824737Z" level=info msg="StopPodSandbox for \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\"" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.037898910Z" level=info msg="Container to stop \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.037918191Z" level=info msg="Container to stop \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.037937099Z" level=info msg="Container to stop \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.037988456Z" level=info msg="Container to stop \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.039660 containerd[1469]: time="2025-01-17T12:20:18.038006086Z" level=info msg="Container to stop \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:20:18.044098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2-shm.mount: Deactivated successfully. Jan 17 12:20:18.059063 systemd[1]: cri-containerd-590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2.scope: Deactivated successfully. 
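"StopContainer ... with timeout 30" followed by "Stop container ... with signal terminated" is the usual graceful-stop contract: deliver SIGTERM, wait up to the timeout, then escalate to SIGKILL. A rough sketch of that pattern against a plain PID (the PID is hypothetical, and containerd drives this through the runtime shim rather than raw signals):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopProcess sends SIGTERM, polls for exit, and escalates to SIGKILL
// once the grace period elapses.
func stopProcess(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid) // never fails on Unix
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.After(grace)
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			fmt.Println("grace period over, sending SIGKILL")
			return proc.Signal(syscall.SIGKILL)
		case <-tick.C:
			// Signal 0 only probes whether the process still exists.
			if err := proc.Signal(syscall.Signal(0)); err != nil {
				return nil // already exited
			}
		}
	}
}

func main() {
	// 12345 is a made-up PID; the containers above used a 30-second timeout.
	if err := stopProcess(12345, 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```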
Jan 17 12:20:18.101235 containerd[1469]: time="2025-01-17T12:20:18.100809877Z" level=info msg="shim disconnected" id=9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550 namespace=k8s.io Jan 17 12:20:18.101235 containerd[1469]: time="2025-01-17T12:20:18.100890951Z" level=warning msg="cleaning up after shim disconnected" id=9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550 namespace=k8s.io Jan 17 12:20:18.101235 containerd[1469]: time="2025-01-17T12:20:18.100909816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:18.131269 containerd[1469]: time="2025-01-17T12:20:18.130118843Z" level=info msg="shim disconnected" id=590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2 namespace=k8s.io Jan 17 12:20:18.131269 containerd[1469]: time="2025-01-17T12:20:18.130232218Z" level=warning msg="cleaning up after shim disconnected" id=590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2 namespace=k8s.io Jan 17 12:20:18.131269 containerd[1469]: time="2025-01-17T12:20:18.130245896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:18.162412 containerd[1469]: time="2025-01-17T12:20:18.162351507Z" level=info msg="TearDown network for sandbox \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\" successfully" Jan 17 12:20:18.163281 containerd[1469]: time="2025-01-17T12:20:18.162654631Z" level=info msg="StopPodSandbox for \"9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550\" returns successfully" Jan 17 12:20:18.163281 containerd[1469]: time="2025-01-17T12:20:18.162412332Z" level=info msg="TearDown network for sandbox \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" successfully" Jan 17 12:20:18.163281 containerd[1469]: time="2025-01-17T12:20:18.162902114Z" level=info msg="StopPodSandbox for \"590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2\" returns successfully" Jan 17 12:20:18.187848 kubelet[2563]: E0117 12:20:18.163052 2563 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:20:18.269901 kubelet[2563]: I0117 12:20:18.268326 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b9811b-054a-48ee-8dfa-a704ac286526-cilium-config-path\") pod \"92b9811b-054a-48ee-8dfa-a704ac286526\" (UID: \"92b9811b-054a-48ee-8dfa-a704ac286526\") " Jan 17 12:20:18.269901 kubelet[2563]: I0117 12:20:18.268481 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkxg\" (UniqueName: \"kubernetes.io/projected/92b9811b-054a-48ee-8dfa-a704ac286526-kube-api-access-rqkxg\") pod \"92b9811b-054a-48ee-8dfa-a704ac286526\" (UID: \"92b9811b-054a-48ee-8dfa-a704ac286526\") " Jan 17 12:20:18.280859 kubelet[2563]: I0117 12:20:18.275150 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b9811b-054a-48ee-8dfa-a704ac286526-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92b9811b-054a-48ee-8dfa-a704ac286526" (UID: "92b9811b-054a-48ee-8dfa-a704ac286526"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:20:18.282021 kubelet[2563]: I0117 12:20:18.281920 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b9811b-054a-48ee-8dfa-a704ac286526-kube-api-access-rqkxg" (OuterVolumeSpecName: "kube-api-access-rqkxg") pod "92b9811b-054a-48ee-8dfa-a704ac286526" (UID: "92b9811b-054a-48ee-8dfa-a704ac286526"). InnerVolumeSpecName "kube-api-access-rqkxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:20:18.369128 kubelet[2563]: I0117 12:20:18.369000 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.369547 kubelet[2563]: I0117 12:20:18.369514 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-net\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.374288 kubelet[2563]: I0117 12:20:18.369737 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxjr7\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374608 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-bpf-maps\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374652 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-kernel\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374676 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-etc-cni-netd\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374698 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-run\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374735 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2982d53b-908f-43a9-a46b-8b9e9f1749f8-clustermesh-secrets\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375186 kubelet[2563]: I0117 12:20:18.374789 2563 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-lib-modules\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374824 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-config-path\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374857 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hubble-tls\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374898 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hostproc\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374925 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cni-path\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374951 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-xtables-lock\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375549 kubelet[2563]: I0117 12:20:18.374977 2563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-cgroup\") pod \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\" (UID: \"2982d53b-908f-43a9-a46b-8b9e9f1749f8\") " Jan 17 12:20:18.375898 kubelet[2563]: I0117 12:20:18.375151 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7" (OuterVolumeSpecName: "kube-api-access-vxjr7") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "kube-api-access-vxjr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:20:18.375898 kubelet[2563]: I0117 12:20:18.375287 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.375898 kubelet[2563]: I0117 12:20:18.375313 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.375898 kubelet[2563]: I0117 12:20:18.375335 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.375898 kubelet[2563]: I0117 12:20:18.375360 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.377993 kubelet[2563]: I0117 12:20:18.375384 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.379303 kubelet[2563]: I0117 12:20:18.379253 2563 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b9811b-054a-48ee-8dfa-a704ac286526-cilium-config-path\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.379497 kubelet[2563]: I0117 12:20:18.379470 2563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-net\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.379578 kubelet[2563]: I0117 12:20:18.379564 2563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rqkxg\" (UniqueName: \"kubernetes.io/projected/92b9811b-054a-48ee-8dfa-a704ac286526-kube-api-access-rqkxg\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.379733 kubelet[2563]: I0117 12:20:18.379699 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.382618 kubelet[2563]: I0117 12:20:18.382072 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2982d53b-908f-43a9-a46b-8b9e9f1749f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:20:18.382781 kubelet[2563]: I0117 12:20:18.382759 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.384220 kubelet[2563]: I0117 12:20:18.384122 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:20:18.384427 kubelet[2563]: I0117 12:20:18.384411 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.384505 kubelet[2563]: I0117 12:20:18.384495 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:20:18.388007 kubelet[2563]: I0117 12:20:18.387951 2563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2982d53b-908f-43a9-a46b-8b9e9f1749f8" (UID: "2982d53b-908f-43a9-a46b-8b9e9f1749f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.480984 2563 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-xtables-lock\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481042 2563 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hostproc\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481057 2563 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cni-path\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481071 2563 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-cgroup\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481115 2563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vxjr7\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-kube-api-access-vxjr7\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481132 2563 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-bpf-maps\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481147 2563 
reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-etc-cni-netd\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.482551 kubelet[2563]: I0117 12:20:18.481201 2563 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-run\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.483048 kubelet[2563]: I0117 12:20:18.481217 2563 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2982d53b-908f-43a9-a46b-8b9e9f1749f8-clustermesh-secrets\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.483048 kubelet[2563]: I0117 12:20:18.481231 2563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-host-proc-sys-kernel\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.483048 kubelet[2563]: I0117 12:20:18.481246 2563 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2982d53b-908f-43a9-a46b-8b9e9f1749f8-hubble-tls\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.483048 kubelet[2563]: I0117 12:20:18.481264 2563 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2982d53b-908f-43a9-a46b-8b9e9f1749f8-lib-modules\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.483048 kubelet[2563]: I0117 12:20:18.481277 2563 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2982d53b-908f-43a9-a46b-8b9e9f1749f8-cilium-config-path\") on node \"ci-4081.3.0-8-018bcc3779\" DevicePath \"\"" Jan 17 12:20:18.528193 kubelet[2563]: I0117 12:20:18.526509 2563 scope.go:117] "RemoveContainer" containerID="9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c" Jan 17 12:20:18.563805 systemd[1]: Removed slice kubepods-burstable-pod2982d53b_908f_43a9_a46b_8b9e9f1749f8.slice - libcontainer container kubepods-burstable-pod2982d53b_908f_43a9_a46b_8b9e9f1749f8.slice. Jan 17 12:20:18.565243 systemd[1]: kubepods-burstable-pod2982d53b_908f_43a9_a46b_8b9e9f1749f8.slice: Consumed 10.256s CPU time. Jan 17 12:20:18.570504 containerd[1469]: time="2025-01-17T12:20:18.568767965Z" level=info msg="RemoveContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\"" Jan 17 12:20:18.585288 containerd[1469]: time="2025-01-17T12:20:18.585209564Z" level=info msg="RemoveContainer for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" returns successfully" Jan 17 12:20:18.589752 kubelet[2563]: I0117 12:20:18.589688 2563 scope.go:117] "RemoveContainer" containerID="71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246" Jan 17 12:20:18.604445 systemd[1]: Removed slice kubepods-besteffort-pod92b9811b_054a_48ee_8dfa_a704ac286526.slice - libcontainer container kubepods-besteffort-pod92b9811b_054a_48ee_8dfa_a704ac286526.slice. 
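The long run of UnmountVolume.TearDown and "Volume detached" messages above is kubelet's volume reconciler converging actual state toward desired state after the cilium and operator pods were deleted: each volume still mounted but no longer desired is unmounted and then reported detached. A toy reduction of that loop, with illustrative names only (nothing like the real reconciler_common.go):

```go
package main

import "fmt"

// reconcile unmounts every volume that is mounted but no longer desired,
// echoing the UnmountVolume / "Volume detached" pairs in the log.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if desired[vol] {
			continue
		}
		fmt.Printf("UnmountVolume started for volume %q\n", vol)
		delete(actual, vol) // deleting while ranging is safe in Go
		fmt.Printf("Volume detached for volume %q\n", vol)
	}
}

func main() {
	desired := map[string]bool{} // the pod is gone, so nothing is desired
	actual := map[string]bool{
		"cilium-config-path":  true,
		"clustermesh-secrets": true,
		"hubble-tls":          true,
		"bpf-maps":            true,
	}
	reconcile(desired, actual)
}
```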
Jan 17 12:20:18.609413 containerd[1469]: time="2025-01-17T12:20:18.609365831Z" level=info msg="RemoveContainer for \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\"" Jan 17 12:20:18.615966 containerd[1469]: time="2025-01-17T12:20:18.615902610Z" level=info msg="RemoveContainer for \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\" returns successfully" Jan 17 12:20:18.616865 kubelet[2563]: I0117 12:20:18.616829 2563 scope.go:117] "RemoveContainer" containerID="f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f" Jan 17 12:20:18.620135 containerd[1469]: time="2025-01-17T12:20:18.619883145Z" level=info msg="RemoveContainer for \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\"" Jan 17 12:20:18.627367 containerd[1469]: time="2025-01-17T12:20:18.627210782Z" level=info msg="RemoveContainer for \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\" returns successfully" Jan 17 12:20:18.631407 kubelet[2563]: I0117 12:20:18.631279 2563 scope.go:117] "RemoveContainer" containerID="07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1" Jan 17 12:20:18.635183 containerd[1469]: time="2025-01-17T12:20:18.634081070Z" level=info msg="RemoveContainer for \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\"" Jan 17 12:20:18.643026 containerd[1469]: time="2025-01-17T12:20:18.642962858Z" level=info msg="RemoveContainer for \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\" returns successfully" Jan 17 12:20:18.645416 kubelet[2563]: I0117 12:20:18.643507 2563 scope.go:117] "RemoveContainer" containerID="0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae" Jan 17 12:20:18.653001 containerd[1469]: time="2025-01-17T12:20:18.652921980Z" level=info msg="RemoveContainer for \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\"" Jan 17 12:20:18.659014 containerd[1469]: time="2025-01-17T12:20:18.658939671Z" level=info msg="RemoveContainer for \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\" returns successfully" Jan 17 12:20:18.659782 kubelet[2563]: I0117 12:20:18.659364 2563 scope.go:117] "RemoveContainer" containerID="9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c" Jan 17 12:20:18.675437 containerd[1469]: time="2025-01-17T12:20:18.663518311Z" level=error msg="ContainerStatus for \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\": not found" Jan 17 12:20:18.676130 kubelet[2563]: E0117 12:20:18.676071 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\": not found" containerID="9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c" Jan 17 12:20:18.681448 kubelet[2563]: I0117 12:20:18.677255 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c"} err="failed to get container status \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eea8501f014d1c63a97b446df5f66028605d0f6cfe852c3410298b504dd443c\": not found" Jan 17 12:20:18.681448 kubelet[2563]: I0117 
12:20:18.681400 2563 scope.go:117] "RemoveContainer" containerID="71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246" Jan 17 12:20:18.682676 containerd[1469]: time="2025-01-17T12:20:18.682414353Z" level=error msg="ContainerStatus for \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\": not found" Jan 17 12:20:18.683109 kubelet[2563]: E0117 12:20:18.682967 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\": not found" containerID="71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246" Jan 17 12:20:18.683109 kubelet[2563]: I0117 12:20:18.683007 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246"} err="failed to get container status \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\": rpc error: code = NotFound desc = an error occurred when try to find container \"71fc8e1a4895cd354aba6d67684f89505b35601f863cd83058e56566547d5246\": not found" Jan 17 12:20:18.683109 kubelet[2563]: I0117 12:20:18.683061 2563 scope.go:117] "RemoveContainer" containerID="f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f" Jan 17 12:20:18.683991 containerd[1469]: time="2025-01-17T12:20:18.683821071Z" level=error msg="ContainerStatus for \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\": not found" Jan 17 12:20:18.684376 kubelet[2563]: E0117 12:20:18.684174 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\": not found" containerID="f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f" Jan 17 12:20:18.684679 kubelet[2563]: I0117 12:20:18.684209 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f"} err="failed to get container status \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f63163684144c5ccbe0e4c77fcf5ae09967d8cf8696d6d4942395ac28f82be6f\": not found" Jan 17 12:20:18.684679 kubelet[2563]: I0117 12:20:18.684523 2563 scope.go:117] "RemoveContainer" containerID="07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1" Jan 17 12:20:18.685395 containerd[1469]: time="2025-01-17T12:20:18.685053696Z" level=error msg="ContainerStatus for \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\": not found" Jan 17 12:20:18.685533 kubelet[2563]: E0117 12:20:18.685352 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\": not found" containerID="07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1" Jan 17 12:20:18.685615 kubelet[2563]: I0117 12:20:18.685385 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1"} err="failed to get container status \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"07896420bddee4d82ac4e46157b61aed59f041c31704a8eeb9ca7dc9883146b1\": not found" Jan 17 12:20:18.685664 kubelet[2563]: I0117 12:20:18.685617 2563 scope.go:117] "RemoveContainer" containerID="0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae" Jan 17 12:20:18.686230 containerd[1469]: time="2025-01-17T12:20:18.686070474Z" level=error msg="ContainerStatus for \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\": not found" Jan 17 12:20:18.686348 kubelet[2563]: E0117 12:20:18.686273 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\": not found" containerID="0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae" Jan 17 12:20:18.686348 kubelet[2563]: I0117 12:20:18.686303 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae"} err="failed to get container status \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a2b06702a8e11901665afb56374e9400464b1f3b8dac59051d424a4f999a4ae\": not found" Jan 17 12:20:18.686348 kubelet[2563]: I0117 12:20:18.686326 2563 scope.go:117] "RemoveContainer" containerID="4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546" Jan 17 12:20:18.688401 containerd[1469]: time="2025-01-17T12:20:18.688346587Z" level=info msg="RemoveContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\"" Jan 17 12:20:18.694721 containerd[1469]: time="2025-01-17T12:20:18.694621869Z" level=info msg="RemoveContainer for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" returns successfully" Jan 17 12:20:18.695625 kubelet[2563]: I0117 12:20:18.695055 2563 scope.go:117] "RemoveContainer" containerID="4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546" Jan 17 12:20:18.695772 containerd[1469]: time="2025-01-17T12:20:18.695511028Z" level=error msg="ContainerStatus for \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\": not found" Jan 17 12:20:18.698807 kubelet[2563]: E0117 12:20:18.698628 2563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\": not found" containerID="4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546" 
Jan 17 12:20:18.698807 kubelet[2563]: I0117 12:20:18.698694 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546"} err="failed to get container status \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\": rpc error: code = NotFound desc = an error occurred when try to find container \"4801516a4a221b68b55e9c7ad932d9953006125931b254fcd15fbe525220a546\": not found"
Jan 17 12:20:18.737848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-590736cba18d1b710dd9f77069a53d8ced1a6f994062658bf6ba012334d3e6a2-rootfs.mount: Deactivated successfully.
Jan 17 12:20:18.738105 systemd[1]: var-lib-kubelet-pods-2982d53b\x2d908f\x2d43a9\x2da46b\x2d8b9e9f1749f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxjr7.mount: Deactivated successfully.
Jan 17 12:20:18.738255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ca4f165f731ebde4fbc02604efefca218509a6b4a0de7fa6b5dae2cb6ccc550-rootfs.mount: Deactivated successfully.
Jan 17 12:20:18.738352 systemd[1]: var-lib-kubelet-pods-92b9811b\x2d054a\x2d48ee\x2d8dfa\x2da704ac286526-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqkxg.mount: Deactivated successfully.
Jan 17 12:20:18.738447 systemd[1]: var-lib-kubelet-pods-2982d53b\x2d908f\x2d43a9\x2da46b\x2d8b9e9f1749f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 12:20:18.738537 systemd[1]: var-lib-kubelet-pods-2982d53b\x2d908f\x2d43a9\x2da46b\x2d8b9e9f1749f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:20:18.887292 kubelet[2563]: I0117 12:20:18.886974 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" path="/var/lib/kubelet/pods/2982d53b-908f-43a9-a46b-8b9e9f1749f8/volumes"
Jan 17 12:20:18.890627 kubelet[2563]: I0117 12:20:18.888081 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b9811b-054a-48ee-8dfa-a704ac286526" path="/var/lib/kubelet/pods/92b9811b-054a-48ee-8dfa-a704ac286526/volumes"
Jan 17 12:20:19.496429 sshd[4158]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:19.514360 systemd[1]: sshd@23-209.38.133.237:22-139.178.68.195:59622.service: Deactivated successfully.
Jan 17 12:20:19.519320 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:20:19.525477 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:20:19.533809 systemd[1]: Started sshd@24-209.38.133.237:22-139.178.68.195:59636.service - OpenSSH per-connection server daemon (139.178.68.195:59636).
Jan 17 12:20:19.536144 systemd-logind[1446]: Removed session 24.
Jan 17 12:20:19.620534 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 59636 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:19.622576 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:19.633778 systemd-logind[1446]: New session 25 of user core.
Jan 17 12:20:19.642678 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:20:20.841854 sshd[4320]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:20.862137 systemd[1]: sshd@24-209.38.133.237:22-139.178.68.195:59636.service: Deactivated successfully.
Jan 17 12:20:20.872650 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:20:20.886384 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:20:20.894910 systemd[1]: Started sshd@25-209.38.133.237:22-139.178.68.195:59646.service - OpenSSH per-connection server daemon (139.178.68.195:59646).
Jan 17 12:20:20.900774 systemd-logind[1446]: Removed session 25.
Jan 17 12:20:20.903582 kubelet[2563]: I0117 12:20:20.893928 2563 topology_manager.go:215] "Topology Admit Handler" podUID="23d5bb0c-be70-41ce-a72b-d689f616127d" podNamespace="kube-system" podName="cilium-njklk"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909210 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92b9811b-054a-48ee-8dfa-a704ac286526" containerName="cilium-operator"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909259 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="mount-bpf-fs"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909270 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="cilium-agent"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909281 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="mount-cgroup"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909291 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="apply-sysctl-overwrites"
Jan 17 12:20:20.910195 kubelet[2563]: E0117 12:20:20.909306 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="clean-cilium-state"
Jan 17 12:20:20.918219 kubelet[2563]: I0117 12:20:20.909346 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b9811b-054a-48ee-8dfa-a704ac286526" containerName="cilium-operator"
Jan 17 12:20:20.918503 kubelet[2563]: I0117 12:20:20.918450 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="2982d53b-908f-43a9-a46b-8b9e9f1749f8" containerName="cilium-agent"
Jan 17 12:20:21.010601 systemd[1]: Created slice kubepods-burstable-pod23d5bb0c_be70_41ce_a72b_d689f616127d.slice - libcontainer container kubepods-burstable-pod23d5bb0c_be70_41ce_a72b_d689f616127d.slice.
Jan 17 12:20:21.027385 kubelet[2563]: I0117 12:20:21.027307 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-cilium-run\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.029877 kubelet[2563]: I0117 12:20:21.029376 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-cilium-cgroup\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030425 kubelet[2563]: I0117 12:20:21.030201 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23d5bb0c-be70-41ce-a72b-d689f616127d-clustermesh-secrets\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030425 kubelet[2563]: I0117 12:20:21.030272 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-hostproc\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030425 kubelet[2563]: I0117 12:20:21.030311 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/23d5bb0c-be70-41ce-a72b-d689f616127d-cilium-ipsec-secrets\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030425 kubelet[2563]: I0117 12:20:21.030363 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-xtables-lock\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030425 kubelet[2563]: I0117 12:20:21.030391 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d5bb0c-be70-41ce-a72b-d689f616127d-cilium-config-path\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030949 kubelet[2563]: I0117 12:20:21.030740 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-host-proc-sys-kernel\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030949 kubelet[2563]: I0117 12:20:21.030801 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23d5bb0c-be70-41ce-a72b-d689f616127d-hubble-tls\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030949 kubelet[2563]: I0117 12:20:21.030828 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwsr\" (UniqueName: \"kubernetes.io/projected/23d5bb0c-be70-41ce-a72b-d689f616127d-kube-api-access-bcwsr\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030949 kubelet[2563]: I0117 12:20:21.030881 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-host-proc-sys-net\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.030949 kubelet[2563]: I0117 12:20:21.030910 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-lib-modules\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.031874 kubelet[2563]: I0117 12:20:21.031211 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-bpf-maps\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.031874 kubelet[2563]: I0117 12:20:21.031249 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-cni-path\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.031874 kubelet[2563]: I0117 12:20:21.031288 2563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23d5bb0c-be70-41ce-a72b-d689f616127d-etc-cni-netd\") pod \"cilium-njklk\" (UID: \"23d5bb0c-be70-41ce-a72b-d689f616127d\") " pod="kube-system/cilium-njklk"
Jan 17 12:20:21.040803 sshd[4332]: Accepted publickey for core from 139.178.68.195 port 59646 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:21.046216 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:21.061774 systemd-logind[1446]: New session 26 of user core.
Jan 17 12:20:21.073577 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:20:21.177892 sshd[4332]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:21.249647 systemd[1]: sshd@25-209.38.133.237:22-139.178.68.195:59646.service: Deactivated successfully.
Jan 17 12:20:21.255192 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:20:21.261864 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:20:21.270513 systemd[1]: Started sshd@26-209.38.133.237:22-139.178.68.195:59652.service - OpenSSH per-connection server daemon (139.178.68.195:59652).
Jan 17 12:20:21.276783 systemd-logind[1446]: Removed session 26.
Jan 17 12:20:21.352408 kubelet[2563]: E0117 12:20:21.352343 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:21.354902 containerd[1469]: time="2025-01-17T12:20:21.353573753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njklk,Uid:23d5bb0c-be70-41ce-a72b-d689f616127d,Namespace:kube-system,Attempt:0,}"
Jan 17 12:20:21.373071 sshd[4346]: Accepted publickey for core from 139.178.68.195 port 59652 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:21.372807 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:21.390907 systemd-logind[1446]: New session 27 of user core.
Jan 17 12:20:21.400190 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:20:21.413458 containerd[1469]: time="2025-01-17T12:20:21.413194174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:20:21.413458 containerd[1469]: time="2025-01-17T12:20:21.413306203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:20:21.413458 containerd[1469]: time="2025-01-17T12:20:21.413347938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:20:21.414311 containerd[1469]: time="2025-01-17T12:20:21.413535554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:20:21.451936 systemd[1]: Started cri-containerd-f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f.scope - libcontainer container f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f.
Jan 17 12:20:21.518422 containerd[1469]: time="2025-01-17T12:20:21.516991153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njklk,Uid:23d5bb0c-be70-41ce-a72b-d689f616127d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\""
Jan 17 12:20:21.524821 kubelet[2563]: E0117 12:20:21.523336 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:21.539950 containerd[1469]: time="2025-01-17T12:20:21.539869132Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:20:21.582129 containerd[1469]: time="2025-01-17T12:20:21.581585492Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364\""
Jan 17 12:20:21.585303 containerd[1469]: time="2025-01-17T12:20:21.584189369Z" level=info msg="StartContainer for \"12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364\""
Jan 17 12:20:21.663519 systemd[1]: Started cri-containerd-12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364.scope - libcontainer container 12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364.
Jan 17 12:20:21.738969 containerd[1469]: time="2025-01-17T12:20:21.738893155Z" level=info msg="StartContainer for \"12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364\" returns successfully"
Jan 17 12:20:21.751378 systemd[1]: cri-containerd-12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364.scope: Deactivated successfully.
Jan 17 12:20:21.827204 containerd[1469]: time="2025-01-17T12:20:21.827080546Z" level=info msg="shim disconnected" id=12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364 namespace=k8s.io
Jan 17 12:20:21.827204 containerd[1469]: time="2025-01-17T12:20:21.827197003Z" level=warning msg="cleaning up after shim disconnected" id=12f490696f87ca2886c594f4c267dda4708b71247369bb36271957daf67da364 namespace=k8s.io
Jan 17 12:20:21.827204 containerd[1469]: time="2025-01-17T12:20:21.827211277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:20:21.849215 containerd[1469]: time="2025-01-17T12:20:21.848482563Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:20:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:20:22.601217 kubelet[2563]: E0117 12:20:22.600798 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:22.605901 containerd[1469]: time="2025-01-17T12:20:22.605644382Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:20:22.635377 containerd[1469]: time="2025-01-17T12:20:22.635016088Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b\""
Jan 17 12:20:22.639869 containerd[1469]: time="2025-01-17T12:20:22.639791211Z" level=info msg="StartContainer for \"18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b\""
Jan 17 12:20:22.707568 systemd[1]: Started cri-containerd-18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b.scope - libcontainer container 18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b.
Jan 17 12:20:22.787404 containerd[1469]: time="2025-01-17T12:20:22.786772443Z" level=info msg="StartContainer for \"18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b\" returns successfully"
Jan 17 12:20:22.788486 systemd[1]: cri-containerd-18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b.scope: Deactivated successfully.
Jan 17 12:20:22.831529 containerd[1469]: time="2025-01-17T12:20:22.831303584Z" level=info msg="shim disconnected" id=18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b namespace=k8s.io
Jan 17 12:20:22.831529 containerd[1469]: time="2025-01-17T12:20:22.831424115Z" level=warning msg="cleaning up after shim disconnected" id=18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b namespace=k8s.io
Jan 17 12:20:22.831529 containerd[1469]: time="2025-01-17T12:20:22.831439058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:20:22.882510 kubelet[2563]: E0117 12:20:22.881359 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:23.156927 systemd[1]: run-containerd-runc-k8s.io-18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b-runc.ySamNP.mount: Deactivated successfully.
Jan 17 12:20:23.157073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d788753da0a92f934a0e19f7bbe86e871b2b1ee46b74b1abddb6353b55249b-rootfs.mount: Deactivated successfully.
Jan 17 12:20:23.189333 kubelet[2563]: E0117 12:20:23.189274 2563 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:20:23.605406 kubelet[2563]: E0117 12:20:23.605354 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:23.611508 containerd[1469]: time="2025-01-17T12:20:23.610758893Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:20:23.653186 containerd[1469]: time="2025-01-17T12:20:23.650393452Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c\""
Jan 17 12:20:23.656634 containerd[1469]: time="2025-01-17T12:20:23.655505451Z" level=info msg="StartContainer for \"9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c\""
Jan 17 12:20:23.726548 systemd[1]: Started cri-containerd-9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c.scope - libcontainer container 9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c.
Jan 17 12:20:23.783211 containerd[1469]: time="2025-01-17T12:20:23.783101807Z" level=info msg="StartContainer for \"9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c\" returns successfully"
Jan 17 12:20:23.788387 systemd[1]: cri-containerd-9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c.scope: Deactivated successfully.
Jan 17 12:20:23.830148 containerd[1469]: time="2025-01-17T12:20:23.830027414Z" level=info msg="shim disconnected" id=9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c namespace=k8s.io
Jan 17 12:20:23.830446 containerd[1469]: time="2025-01-17T12:20:23.830137751Z" level=warning msg="cleaning up after shim disconnected" id=9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c namespace=k8s.io
Jan 17 12:20:23.830446 containerd[1469]: time="2025-01-17T12:20:23.830227945Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:20:24.155539 systemd[1]: run-containerd-runc-k8s.io-9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c-runc.ebTXbe.mount: Deactivated successfully.
Jan 17 12:20:24.155842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aaae4ceee988d6b364761323233415505a308b2cbf040ee2aefc155c31cb33c-rootfs.mount: Deactivated successfully.
Jan 17 12:20:24.612264 kubelet[2563]: E0117 12:20:24.612195 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:24.618535 containerd[1469]: time="2025-01-17T12:20:24.618177031Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:20:24.651356 containerd[1469]: time="2025-01-17T12:20:24.651256964Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6\""
Jan 17 12:20:24.652613 containerd[1469]: time="2025-01-17T12:20:24.652566179Z" level=info msg="StartContainer for \"a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6\""
Jan 17 12:20:24.711450 systemd[1]: Started cri-containerd-a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6.scope - libcontainer container a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6.
Jan 17 12:20:24.764832 systemd[1]: cri-containerd-a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6.scope: Deactivated successfully.
Jan 17 12:20:24.768928 containerd[1469]: time="2025-01-17T12:20:24.768740867Z" level=info msg="StartContainer for \"a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6\" returns successfully"
Jan 17 12:20:24.806068 containerd[1469]: time="2025-01-17T12:20:24.805961962Z" level=info msg="shim disconnected" id=a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6 namespace=k8s.io
Jan 17 12:20:24.807084 containerd[1469]: time="2025-01-17T12:20:24.806741500Z" level=warning msg="cleaning up after shim disconnected" id=a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6 namespace=k8s.io
Jan 17 12:20:24.807084 containerd[1469]: time="2025-01-17T12:20:24.806835986Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:20:25.155971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a709c65595674c595e90052b0da643e1fcae0cfef03e5f5f00a91d73173387c6-rootfs.mount: Deactivated successfully.
Jan 17 12:20:25.377969 kubelet[2563]: I0117 12:20:25.376278 2563 setters.go:580] "Node became not ready" node="ci-4081.3.0-8-018bcc3779" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:20:25Z","lastTransitionTime":"2025-01-17T12:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:20:25.618639 kubelet[2563]: E0117 12:20:25.618383 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:25.626032 containerd[1469]: time="2025-01-17T12:20:25.624574133Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:20:25.666503 containerd[1469]: time="2025-01-17T12:20:25.666418872Z" level=info msg="CreateContainer within sandbox \"f3e862317ca7a39ab82e5276b5c7fc0f33ccdd0dab028e4a59d8e21483dee57f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e\""
Jan 17 12:20:25.671216 containerd[1469]: time="2025-01-17T12:20:25.671056204Z" level=info msg="StartContainer for \"a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e\""
Jan 17 12:20:25.733585 systemd[1]: Started cri-containerd-a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e.scope - libcontainer container a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e.
Jan 17 12:20:25.803219 containerd[1469]: time="2025-01-17T12:20:25.802493880Z" level=info msg="StartContainer for \"a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e\" returns successfully"
Jan 17 12:20:26.159338 systemd[1]: run-containerd-runc-k8s.io-a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e-runc.PRDet0.mount: Deactivated successfully.
Jan 17 12:20:26.428253 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:20:26.633476 kubelet[2563]: E0117 12:20:26.633019 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:27.632487 kubelet[2563]: E0117 12:20:27.632336 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:28.213976 systemd[1]: run-containerd-runc-k8s.io-a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e-runc.wLy2OP.mount: Deactivated successfully.
Jan 17 12:20:28.636727 kubelet[2563]: E0117 12:20:28.636672 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:30.244821 systemd-networkd[1371]: lxc_health: Link UP
Jan 17 12:20:30.261503 systemd-networkd[1371]: lxc_health: Gained carrier
Jan 17 12:20:31.358881 kubelet[2563]: E0117 12:20:31.358643 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:31.413118 kubelet[2563]: I0117 12:20:31.413007 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-njklk" podStartSLOduration=11.412978935 podStartE2EDuration="11.412978935s" podCreationTimestamp="2025-01-17 12:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:26.673102419 +0000 UTC m=+103.988286552" watchObservedRunningTime="2025-01-17 12:20:31.412978935 +0000 UTC m=+108.728163068"
Jan 17 12:20:31.648810 kubelet[2563]: E0117 12:20:31.648307 2563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:20:31.963285 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Jan 17 12:20:32.831491 systemd[1]: run-containerd-runc-k8s.io-a1ccafe285054bfe6b88deb26f6309974900966da2810dcdb66db272298bc72e-runc.7AEkps.mount: Deactivated successfully.
Jan 17 12:20:37.329683 sshd[4346]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:37.337715 systemd[1]: sshd@26-209.38.133.237:22-139.178.68.195:59652.service: Deactivated successfully.
Jan 17 12:20:37.341856 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:20:37.343493 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:20:37.345383 systemd-logind[1446]: Removed session 27.