Jul 2 00:19:06.882873 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:19:06.882907 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:06.882921 kernel: BIOS-provided physical RAM map:
Jul 2 00:19:06.882932 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:19:06.882941 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:19:06.882951 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:19:06.882965 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 2 00:19:06.882974 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 2 00:19:06.882984 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:19:06.882993 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:19:06.883003 kernel: NX (Execute Disable) protection: active
Jul 2 00:19:06.883014 kernel: APIC: Static calls initialized
Jul 2 00:19:06.883025 kernel: SMBIOS 2.8 present.
Jul 2 00:19:06.883034 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 2 00:19:06.883042 kernel: Hypervisor detected: KVM
Jul 2 00:19:06.883053 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:19:06.883061 kernel: kvm-clock: using sched offset of 3059316995 cycles
Jul 2 00:19:06.883073 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:19:06.883081 kernel: tsc: Detected 2494.138 MHz processor
Jul 2 00:19:06.883090 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:19:06.883100 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:19:06.883108 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 2 00:19:06.883116 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:19:06.883124 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:19:06.883134 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:19:06.883142 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 2 00:19:06.883150 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883157 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883164 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883172 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 00:19:06.883179 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883187 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883194 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883205 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:06.883212 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 2 00:19:06.883220 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 2 00:19:06.883227 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 00:19:06.883235 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 2 00:19:06.883242 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 2 00:19:06.883249 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 2 00:19:06.883261 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 2 00:19:06.883272 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:19:06.883280 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:19:06.883288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 00:19:06.883300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 00:19:06.883311 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jul 2 00:19:06.883323 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jul 2 00:19:06.883339 kernel: Zone ranges:
Jul 2 00:19:06.883351 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:19:06.883364 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 2 00:19:06.883378 kernel: Normal empty
Jul 2 00:19:06.883387 kernel: Movable zone start for each node
Jul 2 00:19:06.883395 kernel: Early memory node ranges
Jul 2 00:19:06.883403 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:19:06.883410 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 2 00:19:06.883419 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 2 00:19:06.883430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:19:06.883438 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:19:06.883446 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 2 00:19:06.883454 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:19:06.883462 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:19:06.883470 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:19:06.883478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:19:06.883486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:19:06.883494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:19:06.883505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:19:06.883515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:19:06.883524 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:19:06.883531 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:19:06.883539 kernel: TSC deadline timer available
Jul 2 00:19:06.883547 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:19:06.883555 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:19:06.883564 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 00:19:06.883572 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:19:06.883582 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:19:06.883593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:19:06.883601 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:19:06.883609 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:19:06.883617 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:19:06.883625 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 00:19:06.883634 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:06.883643 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:19:06.883650 kernel: random: crng init done
Jul 2 00:19:06.883662 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:19:06.883670 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:19:06.883678 kernel: Fallback order for Node 0: 0
Jul 2 00:19:06.883686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jul 2 00:19:06.885847 kernel: Policy zone: DMA32
Jul 2 00:19:06.885873 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:19:06.885883 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved)
Jul 2 00:19:06.885891 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:19:06.885905 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:19:06.885914 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:19:06.885922 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:19:06.885930 kernel: Dynamic Preempt: voluntary
Jul 2 00:19:06.885938 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:19:06.885947 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:19:06.885956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:19:06.885964 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:19:06.885972 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:19:06.885981 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:19:06.885991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:19:06.886000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:19:06.886008 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:19:06.886016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:19:06.886024 kernel: Console: colour VGA+ 80x25
Jul 2 00:19:06.886032 kernel: printk: console [tty0] enabled
Jul 2 00:19:06.886040 kernel: printk: console [ttyS0] enabled
Jul 2 00:19:06.886048 kernel: ACPI: Core revision 20230628
Jul 2 00:19:06.886057 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:19:06.886068 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:19:06.886076 kernel: x2apic enabled
Jul 2 00:19:06.886084 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:19:06.886099 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:19:06.886107 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jul 2 00:19:06.886116 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jul 2 00:19:06.886124 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 00:19:06.886132 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 00:19:06.886151 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:19:06.886160 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:19:06.886173 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:19:06.886189 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:19:06.886202 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 00:19:06.886215 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:19:06.886228 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:19:06.886242 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 00:19:06.886257 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:19:06.886275 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:19:06.886284 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:19:06.886293 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:19:06.886301 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:19:06.886310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 00:19:06.886319 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:19:06.886328 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:19:06.886336 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:19:06.886348 kernel: SELinux: Initializing.
Jul 2 00:19:06.886357 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:19:06.886365 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:19:06.886374 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 2 00:19:06.886383 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:06.886392 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:06.886401 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:06.886409 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 2 00:19:06.886418 kernel: signal: max sigframe size: 1776
Jul 2 00:19:06.886429 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:19:06.886438 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:19:06.886447 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:19:06.886455 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:19:06.886464 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:19:06.886473 kernel: .... node #0, CPUs: #1
Jul 2 00:19:06.886481 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:19:06.886490 kernel: smpboot: Max logical packages: 1
Jul 2 00:19:06.886499 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jul 2 00:19:06.886511 kernel: devtmpfs: initialized
Jul 2 00:19:06.886519 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:19:06.886528 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:19:06.886537 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:19:06.886545 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:19:06.886554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:19:06.886563 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:19:06.886574 kernel: audit: type=2000 audit(1719879545.903:1): state=initialized audit_enabled=0 res=1
Jul 2 00:19:06.886583 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:19:06.886595 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:19:06.886604 kernel: cpuidle: using governor menu
Jul 2 00:19:06.886612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:19:06.886621 kernel: dca service started, version 1.12.1
Jul 2 00:19:06.886630 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:19:06.886638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:19:06.886647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:19:06.886655 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:19:06.886664 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:19:06.886675 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:19:06.886684 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:19:06.886693 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:19:06.886701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:19:06.886710 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:19:06.886718 kernel: ACPI: Interpreter enabled
Jul 2 00:19:06.886727 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:19:06.886735 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:19:06.886744 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:19:06.886755 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:19:06.886764 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:19:06.886773 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:19:06.887020 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:19:06.887183 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:19:06.887316 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:19:06.887333 kernel: acpiphp: Slot [3] registered
Jul 2 00:19:06.887347 kernel: acpiphp: Slot [4] registered
Jul 2 00:19:06.887356 kernel: acpiphp: Slot [5] registered
Jul 2 00:19:06.887365 kernel: acpiphp: Slot [6] registered
Jul 2 00:19:06.887374 kernel: acpiphp: Slot [7] registered
Jul 2 00:19:06.887382 kernel: acpiphp: Slot [8] registered
Jul 2 00:19:06.887391 kernel: acpiphp: Slot [9] registered
Jul 2 00:19:06.887399 kernel: acpiphp: Slot [10] registered
Jul 2 00:19:06.887408 kernel: acpiphp: Slot [11] registered
Jul 2 00:19:06.887416 kernel: acpiphp: Slot [12] registered
Jul 2 00:19:06.887425 kernel: acpiphp: Slot [13] registered
Jul 2 00:19:06.887437 kernel: acpiphp: Slot [14] registered
Jul 2 00:19:06.887445 kernel: acpiphp: Slot [15] registered
Jul 2 00:19:06.887454 kernel: acpiphp: Slot [16] registered
Jul 2 00:19:06.887463 kernel: acpiphp: Slot [17] registered
Jul 2 00:19:06.887471 kernel: acpiphp: Slot [18] registered
Jul 2 00:19:06.887480 kernel: acpiphp: Slot [19] registered
Jul 2 00:19:06.887488 kernel: acpiphp: Slot [20] registered
Jul 2 00:19:06.887497 kernel: acpiphp: Slot [21] registered
Jul 2 00:19:06.887505 kernel: acpiphp: Slot [22] registered
Jul 2 00:19:06.887517 kernel: acpiphp: Slot [23] registered
Jul 2 00:19:06.887525 kernel: acpiphp: Slot [24] registered
Jul 2 00:19:06.887534 kernel: acpiphp: Slot [25] registered
Jul 2 00:19:06.887542 kernel: acpiphp: Slot [26] registered
Jul 2 00:19:06.887551 kernel: acpiphp: Slot [27] registered
Jul 2 00:19:06.887560 kernel: acpiphp: Slot [28] registered
Jul 2 00:19:06.887568 kernel: acpiphp: Slot [29] registered
Jul 2 00:19:06.887577 kernel: acpiphp: Slot [30] registered
Jul 2 00:19:06.887585 kernel: acpiphp: Slot [31] registered
Jul 2 00:19:06.887594 kernel: PCI host bridge to bus 0000:00
Jul 2 00:19:06.887715 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:19:06.890022 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:19:06.890154 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:19:06.890241 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:19:06.890325 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:19:06.890412 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:19:06.890572 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:19:06.890683 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:19:06.890789 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:19:06.891963 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 2 00:19:06.892089 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:19:06.892217 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:19:06.892350 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:19:06.892486 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:19:06.892643 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 2 00:19:06.892769 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 2 00:19:06.893189 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:19:06.893297 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:19:06.893391 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:19:06.893533 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 00:19:06.893706 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 00:19:06.893875 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 00:19:06.894031 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 2 00:19:06.894178 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:19:06.894324 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:19:06.894472 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:19:06.894574 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 2 00:19:06.894729 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 2 00:19:06.895965 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 00:19:06.896122 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:19:06.896222 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 2 00:19:06.896334 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 2 00:19:06.896429 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 00:19:06.896561 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 2 00:19:06.896656 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 2 00:19:06.896748 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 2 00:19:06.897948 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 00:19:06.898106 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:19:06.898214 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:19:06.898322 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 2 00:19:06.898424 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 00:19:06.898562 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:19:06.898658 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 2 00:19:06.899864 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 2 00:19:06.900010 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 2 00:19:06.900155 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 00:19:06.900252 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 2 00:19:06.900354 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 2 00:19:06.900366 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:19:06.900377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:19:06.900391 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:19:06.900403 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:19:06.900416 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:19:06.900428 kernel: iommu: Default domain type: Translated
Jul 2 00:19:06.900447 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:19:06.900459 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:19:06.900468 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:19:06.900477 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:19:06.900485 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 2 00:19:06.900588 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:19:06.900682 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:19:06.901896 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:19:06.901924 kernel: vgaarb: loaded
Jul 2 00:19:06.901934 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:19:06.901944 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:19:06.901953 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:19:06.901962 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:19:06.901971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:19:06.901980 kernel: pnp: PnP ACPI init
Jul 2 00:19:06.901989 kernel: pnp: PnP ACPI: found 4 devices
Jul 2 00:19:06.901998 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:19:06.902010 kernel: NET: Registered PF_INET protocol family
Jul 2 00:19:06.902019 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:19:06.902028 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:19:06.902037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:19:06.902050 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:19:06.902062 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:19:06.902074 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:19:06.902086 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:19:06.902098 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:19:06.902113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:19:06.902124 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:19:06.902263 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:19:06.902378 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:19:06.902466 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:19:06.902550 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:19:06.902633 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:19:06.902733 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:19:06.902853 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:19:06.902867 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:19:06.902962 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30908 usecs
Jul 2 00:19:06.902974 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:19:06.902983 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:19:06.902992 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jul 2 00:19:06.903001 kernel: Initialise system trusted keyrings
Jul 2 00:19:06.903010 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:19:06.903023 kernel: Key type asymmetric registered
Jul 2 00:19:06.903031 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:19:06.903040 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:19:06.903049 kernel: io scheduler mq-deadline registered
Jul 2 00:19:06.903062 kernel: io scheduler kyber registered
Jul 2 00:19:06.903074 kernel: io scheduler bfq registered
Jul 2 00:19:06.903085 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:19:06.903097 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 00:19:06.903110 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:19:06.903122 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:19:06.903138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:19:06.903150 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:19:06.903161 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:19:06.903173 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:19:06.903184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:19:06.903383 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 2 00:19:06.903401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:19:06.903529 kernel: rtc_cmos 00:03: registered as rtc0
Jul 2 00:19:06.903649 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T00:19:06 UTC (1719879546)
Jul 2 00:19:06.903753 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 2 00:19:06.903765 kernel: intel_pstate: CPU model not supported
Jul 2 00:19:06.903774 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:19:06.903783 kernel: Segment Routing with IPv6
Jul 2 00:19:06.903792 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:19:06.905857 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:19:06.905881 kernel: Key type dns_resolver registered
Jul 2 00:19:06.905897 kernel: IPI shorthand broadcast: enabled
Jul 2 00:19:06.905908 kernel: sched_clock: Marking stable (825003535, 83232687)->(996950311, -88714089)
Jul 2 00:19:06.905917 kernel: registered taskstats version 1
Jul 2 00:19:06.905926 kernel: Loading compiled-in X.509 certificates
Jul 2 00:19:06.905935 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:19:06.905943 kernel: Key type .fscrypt registered
Jul 2 00:19:06.905952 kernel: Key type fscrypt-provisioning registered
Jul 2 00:19:06.905961 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:19:06.905969 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:19:06.905981 kernel: ima: No architecture policies found
Jul 2 00:19:06.905990 kernel: clk: Disabling unused clocks
Jul 2 00:19:06.905999 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:19:06.906013 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:19:06.906026 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:19:06.906063 kernel: Run /init as init process
Jul 2 00:19:06.906081 kernel: with arguments:
Jul 2 00:19:06.906096 kernel: /init
Jul 2 00:19:06.906109 kernel: with environment:
Jul 2 00:19:06.906125 kernel: HOME=/
Jul 2 00:19:06.906138 kernel: TERM=linux
Jul 2 00:19:06.906152 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:19:06.906170 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:19:06.906193 systemd[1]: Detected virtualization kvm.
Jul 2 00:19:06.906210 systemd[1]: Detected architecture x86-64.
Jul 2 00:19:06.906225 systemd[1]: Running in initrd.
Jul 2 00:19:06.906240 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:19:06.906255 systemd[1]: Hostname set to .
Jul 2 00:19:06.906265 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:19:06.906274 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:19:06.906284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:06.906293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:06.906304 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:19:06.906314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:19:06.906324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:19:06.906336 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:19:06.906347 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:19:06.906357 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:19:06.906367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:06.906376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:06.906386 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:19:06.906395 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:19:06.906408 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:19:06.906417 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:19:06.906429 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:19:06.906439 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:19:06.906449 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:19:06.906461 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:19:06.906471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:06.906481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:06.906490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:06.906529 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:19:06.906540 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:19:06.906550 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:19:06.906559 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:19:06.906569 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:19:06.906582 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:19:06.906591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:19:06.906601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:06.906611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:19:06.906620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:06.906674 systemd-journald[183]: Collecting audit messages is disabled.
Jul 2 00:19:06.906700 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:19:06.906711 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:19:06.906725 systemd-journald[183]: Journal started
Jul 2 00:19:06.906747 systemd-journald[183]: Runtime Journal (/run/log/journal/5c570bce581047dcadf613b223da909d) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:19:06.892070 systemd-modules-load[184]: Inserted module 'overlay'
Jul 2 00:19:06.937231 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:19:06.937262 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:19:06.937277 kernel: Bridge firewalling registered
Jul 2 00:19:06.937852 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jul 2 00:19:06.937980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:19:06.940279 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:06.940768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:06.947973 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:06.949942 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:19:06.951975 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:19:06.964196 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:19:06.974559 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:06.976965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:06.982972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:19:06.983542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:06.984160 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:06.989958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:07.003633 dracut-cmdline[217]: dracut-dracut-053
Jul 2 00:19:07.007356 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:07.022069 systemd-resolved[220]: Positive Trust Anchors:
Jul 2 00:19:07.022084 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:19:07.022122 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:19:07.025161 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jul 2 00:19:07.026304 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:07.028478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:07.102866 kernel: SCSI subsystem initialized
Jul 2 00:19:07.116852 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:19:07.132836 kernel: iscsi: registered transport (tcp)
Jul 2 00:19:07.160837 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:19:07.160920 kernel: QLogic iSCSI HBA Driver
Jul 2 00:19:07.207170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:19:07.212018 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:19:07.243521 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:19:07.243599 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:19:07.244620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:19:07.300867 kernel: raid6: avx2x4 gen() 13839 MB/s
Jul 2 00:19:07.317881 kernel: raid6: avx2x2 gen() 14748 MB/s
Jul 2 00:19:07.335034 kernel: raid6: avx2x1 gen() 11288 MB/s
Jul 2 00:19:07.335109 kernel: raid6: using algorithm avx2x2 gen() 14748 MB/s
Jul 2 00:19:07.353031 kernel: raid6: .... xor() 11829 MB/s, rmw enabled
Jul 2 00:19:07.353135 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:19:07.391864 kernel: xor: automatically using best checksumming function avx
Jul 2 00:19:07.674840 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:19:07.688365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:19:07.696064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:07.721306 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jul 2 00:19:07.726330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:07.733092 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:19:07.751919 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jul 2 00:19:07.787664 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:19:07.793968 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:19:07.853738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:07.863367 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:19:07.876610 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:19:07.877881 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:19:07.879437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:07.880854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:19:07.887207 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:19:07.922101 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:19:07.940843 kernel: scsi host0: Virtio SCSI HBA
Jul 2 00:19:07.952842 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:19:07.956870 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 2 00:19:08.063975 kernel: ACPI: bus type USB registered
Jul 2 00:19:08.064006 kernel: usbcore: registered new interface driver usbfs
Jul 2 00:19:08.064027 kernel: usbcore: registered new interface driver hub
Jul 2 00:19:08.064048 kernel: usbcore: registered new device driver usb
Jul 2 00:19:08.064068 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:19:08.064087 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:19:08.064105 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 2 00:19:08.064305 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:19:08.064326 kernel: GPT:9289727 != 125829119
Jul 2 00:19:08.064342 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:19:08.064358 kernel: GPT:9289727 != 125829119
Jul 2 00:19:08.064374 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:19:08.064390 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:08.064407 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 2 00:19:08.065608 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 2 00:19:08.065830 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 2 00:19:08.066029 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 2 00:19:08.066208 kernel: hub 1-0:1.0: USB hub found
Jul 2 00:19:08.066421 kernel: hub 1-0:1.0: 2 ports detected
Jul 2 00:19:08.066606 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 2 00:19:08.090955 kernel: libata version 3.00 loaded.
Jul 2 00:19:08.090984 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jul 2 00:19:08.091168 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:19:08.091351 kernel: scsi host1: ata_piix
Jul 2 00:19:08.091519 kernel: scsi host2: ata_piix
Jul 2 00:19:08.091696 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 2 00:19:08.091733 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 2 00:19:08.035613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:19:08.035815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:08.036428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:08.036786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:08.036967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:08.037378 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:08.045368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:08.125298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:08.130093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:08.163389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:08.260864 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Jul 2 00:19:08.263491 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (466)
Jul 2 00:19:08.277140 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:19:08.281780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:19:08.287693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:19:08.291657 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:19:08.292223 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:19:08.300074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:19:08.305428 disk-uuid[554]: Primary Header is updated.
Jul 2 00:19:08.305428 disk-uuid[554]: Secondary Entries is updated.
Jul 2 00:19:08.305428 disk-uuid[554]: Secondary Header is updated.
Jul 2 00:19:08.316852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:08.322830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:09.331008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:09.331075 disk-uuid[555]: The operation has completed successfully.
Jul 2 00:19:09.379703 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:19:09.379871 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:19:09.393061 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:19:09.396592 sh[568]: Success
Jul 2 00:19:09.414918 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:19:09.462587 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:19:09.470980 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:19:09.475589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:19:09.499292 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:19:09.499367 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:09.499381 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:19:09.500362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:19:09.501083 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:19:09.509412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:19:09.510496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:19:09.519092 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:19:09.523057 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:19:09.530600 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:09.530663 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:09.530678 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:09.534832 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:09.548257 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:09.547904 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:19:09.555347 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:19:09.563025 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:19:09.630277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:19:09.650189 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:19:09.693983 ignition[658]: Ignition 2.18.0
Jul 2 00:19:09.693996 ignition[658]: Stage: fetch-offline
Jul 2 00:19:09.694064 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:09.694075 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:09.696324 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:19:09.694233 ignition[658]: parsed url from cmdline: ""
Jul 2 00:19:09.694237 ignition[658]: no config URL provided
Jul 2 00:19:09.694243 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:19:09.694251 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:19:09.694257 ignition[658]: failed to fetch config: resource requires networking
Jul 2 00:19:09.701437 systemd-networkd[753]: lo: Link UP
Jul 2 00:19:09.694512 ignition[658]: Ignition finished successfully
Jul 2 00:19:09.701442 systemd-networkd[753]: lo: Gained carrier
Jul 2 00:19:09.704283 systemd-networkd[753]: Enumeration completed
Jul 2 00:19:09.704692 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:19:09.704696 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 2 00:19:09.704929 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:19:09.706273 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:19:09.706276 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:19:09.706524 systemd[1]: Reached target network.target - Network.
Jul 2 00:19:09.708370 systemd-networkd[753]: eth0: Link UP
Jul 2 00:19:09.708376 systemd-networkd[753]: eth0: Gained carrier
Jul 2 00:19:09.708388 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:19:09.714064 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:19:09.715283 systemd-networkd[753]: eth1: Link UP
Jul 2 00:19:09.715295 systemd-networkd[753]: eth1: Gained carrier
Jul 2 00:19:09.715311 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:19:09.728906 systemd-networkd[753]: eth0: DHCPv4 address 146.190.126.73/20, gateway 146.190.112.1 acquired from 169.254.169.253
Jul 2 00:19:09.741638 ignition[762]: Ignition 2.18.0
Jul 2 00:19:09.741656 ignition[762]: Stage: fetch
Jul 2 00:19:09.741916 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:09.741928 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:09.744581 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253
Jul 2 00:19:09.742662 ignition[762]: parsed url from cmdline: ""
Jul 2 00:19:09.742667 ignition[762]: no config URL provided
Jul 2 00:19:09.742674 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:19:09.742685 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:19:09.742706 ignition[762]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 2 00:19:09.759306 ignition[762]: GET result: OK
Jul 2 00:19:09.759417 ignition[762]: parsing config with SHA512: 0aa2f8de40988895e5bb463da26bfd8d20fc35cbaf41a0a63938a948075ccbe73d03480e4c9a8341cd2f6b580a16ed49ef6c47c6d9dd570a647d420ee3e00632
Jul 2 00:19:09.763656 unknown[762]: fetched base config from "system"
Jul 2 00:19:09.763667 unknown[762]: fetched base config from "system"
Jul 2 00:19:09.764136 ignition[762]: fetch: fetch complete
Jul 2 00:19:09.763673 unknown[762]: fetched user config from "digitalocean"
Jul 2 00:19:09.764141 ignition[762]: fetch: fetch passed
Jul 2 00:19:09.766278 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:19:09.764188 ignition[762]: Ignition finished successfully
Jul 2 00:19:09.773039 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:19:09.802469 ignition[770]: Ignition 2.18.0
Jul 2 00:19:09.802481 ignition[770]: Stage: kargs
Jul 2 00:19:09.802712 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:09.802724 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:09.803946 ignition[770]: kargs: kargs passed
Jul 2 00:19:09.804008 ignition[770]: Ignition finished successfully
Jul 2 00:19:09.805097 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:19:09.813100 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:19:09.827657 ignition[777]: Ignition 2.18.0
Jul 2 00:19:09.827675 ignition[777]: Stage: disks
Jul 2 00:19:09.827976 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:09.827988 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:09.828939 ignition[777]: disks: disks passed
Jul 2 00:19:09.828993 ignition[777]: Ignition finished successfully
Jul 2 00:19:09.830004 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:19:09.830953 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:19:09.831330 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:19:09.834977 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:19:09.835598 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:19:09.836357 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:19:09.841993 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:19:09.858303 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:19:09.861885 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:19:09.867966 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:19:09.984815 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:19:09.985689 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:19:09.987105 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:19:09.992958 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:19:09.995959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:19:09.998001 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 2 00:19:10.004853 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (794)
Jul 2 00:19:10.007422 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:10.007485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:10.007499 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:10.012063 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:19:10.012502 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:19:10.012538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:19:10.018912 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:10.020470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:19:10.023204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:19:10.033081 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:19:10.088216 coreos-metadata[797]: Jul 02 00:19:10.088 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:10.099925 coreos-metadata[796]: Jul 02 00:19:10.099 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:10.101109 coreos-metadata[797]: Jul 02 00:19:10.099 INFO Fetch successful
Jul 2 00:19:10.106829 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:19:10.108318 coreos-metadata[797]: Jul 02 00:19:10.108 INFO wrote hostname ci-3975.1.1-0-70f2b56eaa to /sysroot/etc/hostname
Jul 2 00:19:10.109841 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:19:10.112368 coreos-metadata[796]: Jul 02 00:19:10.112 INFO Fetch successful
Jul 2 00:19:10.116562 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:19:10.118575 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 2 00:19:10.118672 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 2 00:19:10.124733 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:19:10.129434 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:19:10.220527 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:19:10.227003 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:19:10.231989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:19:10.239821 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:10.264900 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:19:10.270877 ignition[919]: INFO : Ignition 2.18.0
Jul 2 00:19:10.272911 ignition[919]: INFO : Stage: mount
Jul 2 00:19:10.272911 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:10.272911 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:10.272911 ignition[919]: INFO : mount: mount passed
Jul 2 00:19:10.274605 ignition[919]: INFO : Ignition finished successfully
Jul 2 00:19:10.275889 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:19:10.280934 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:19:10.499322 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:19:10.506074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:19:10.525987 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Jul 2 00:19:10.526063 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:10.527139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:10.528058 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:10.531843 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:10.533933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:19:10.565522 ignition[948]: INFO : Ignition 2.18.0
Jul 2 00:19:10.565522 ignition[948]: INFO : Stage: files
Jul 2 00:19:10.566986 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:10.566986 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:10.568295 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:19:10.568921 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:19:10.568921 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:19:10.572318 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:19:10.573056 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:19:10.573056 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:19:10.573025 unknown[948]: wrote ssh authorized keys file for user: core
Jul 2 00:19:10.575337 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:19:10.576201 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:19:10.597344 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:19:10.670821 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:19:10.671556 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:19:10.671556 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:19:11.006219 systemd-networkd[753]: eth0: Gained IPv6LL
Jul 2 00:19:11.126877 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:19:11.189054 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:19:11.189054 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:19:11.190916 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:19:11.195695 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 00:19:11.198005 systemd-networkd[753]: eth1: Gained IPv6LL
Jul 2 00:19:11.570511 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:19:11.848696 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:19:11.848696 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:19:11.851199 ignition[948]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:19:11.856172 ignition[948]: INFO : files: files passed
Jul 2 00:19:11.856172 ignition[948]: INFO : Ignition finished successfully
Jul 2 00:19:11.852881 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:19:11.867186 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:19:11.870129 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:19:11.871034 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:19:11.871153 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:19:11.900405 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:11.900405 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:11.903562 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:11.905511 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:19:11.906859 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:19:11.912433 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:19:11.964022 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:19:11.964180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:19:11.965797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:19:11.966733 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:19:11.967322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:19:11.971008 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:19:11.988757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:19:11.997111 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:19:12.010027 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:12.011393 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:12.012096 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:19:12.013213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:19:12.013392 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:19:12.014605 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:19:12.015829 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:19:12.016663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:19:12.017505 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:19:12.018419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:19:12.019338 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:19:12.020345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:19:12.021312 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:19:12.022287 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:19:12.023204 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:19:12.023983 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:19:12.024164 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:19:12.025252 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:12.025921 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:12.026787 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:19:12.026972 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:12.027913 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:19:12.028086 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:19:12.029182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:19:12.029349 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:19:12.030524 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:19:12.030665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:19:12.031509 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:19:12.031696 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:19:12.039104 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:19:12.041072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:19:12.044077 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:19:12.044238 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:12.045701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:19:12.046306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:19:12.051403 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:19:12.051957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:19:12.066963 ignition[1001]: INFO : Ignition 2.18.0
Jul 2 00:19:12.066963 ignition[1001]: INFO : Stage: umount
Jul 2 00:19:12.067949 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:12.067949 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:12.070206 ignition[1001]: INFO : umount: umount passed
Jul 2 00:19:12.070206 ignition[1001]: INFO : Ignition finished successfully
Jul 2 00:19:12.070066 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:19:12.070221 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:19:12.074479 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:19:12.074635 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:19:12.077565 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:19:12.077645 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:19:12.080108 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:19:12.080177 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:19:12.080603 systemd[1]: Stopped target network.target - Network.
Jul 2 00:19:12.080926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:19:12.080979 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:19:12.081419 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:19:12.081863 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:19:12.086900 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:12.088028 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:19:12.088841 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:19:12.089581 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:19:12.089644 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:19:12.090906 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:19:12.090954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:19:12.091298 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:19:12.091346 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:19:12.092120 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:19:12.092174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:19:12.093260 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:19:12.095003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:12.096643 systemd-networkd[753]: eth0: DHCPv6 lease lost
Jul 2 00:19:12.096752 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:19:12.097350 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:19:12.097442 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:19:12.098603 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:19:12.098712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:19:12.099868 systemd-networkd[753]: eth1: DHCPv6 lease lost
Jul 2 00:19:12.101069 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:19:12.101200 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:19:12.102416 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:19:12.102471 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:12.107027 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:19:12.107370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:19:12.107430 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:19:12.107991 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:12.108624 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:19:12.114417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:12.120152 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:19:12.120876 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:12.123313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:19:12.124166 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:12.125098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:19:12.125542 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:12.126333 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:19:12.126382 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:19:12.127785 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:19:12.127850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:19:12.128279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:19:12.128365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:12.134910 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:19:12.135321 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:19:12.135375 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:12.136203 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:19:12.136249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:12.137117 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:19:12.137165 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:12.137524 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 00:19:12.137559 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:19:12.137916 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:19:12.137956 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:12.138352 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:19:12.138398 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:12.138762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:12.138798 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:12.144213 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:19:12.144346 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:19:12.147467 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:19:12.147575 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:19:12.149360 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:19:12.155003 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:19:12.164846 systemd[1]: Switching root.
Jul 2 00:19:12.253504 systemd-journald[183]: Journal stopped
Jul 2 00:19:13.187862 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:19:13.187932 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:19:13.187982 kernel: SELinux: policy capability open_perms=1
Jul 2 00:19:13.188000 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:19:13.188013 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:19:13.188025 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:19:13.188037 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:19:13.188049 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:19:13.188068 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:19:13.188082 kernel: audit: type=1403 audit(1719879552.393:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:19:13.188096 systemd[1]: Successfully loaded SELinux policy in 44.319ms.
Jul 2 00:19:13.188123 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.398ms.
Jul 2 00:19:13.188138 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:19:13.188151 systemd[1]: Detected virtualization kvm.
Jul 2 00:19:13.188164 systemd[1]: Detected architecture x86-64.
Jul 2 00:19:13.188177 systemd[1]: Detected first boot.
Jul 2 00:19:13.188189 systemd[1]: Hostname set to .
Jul 2 00:19:13.188201 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:19:13.188214 zram_generator::config[1043]: No configuration found.
Jul 2 00:19:13.188232 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:19:13.188244 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:19:13.188257 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:19:13.188269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:19:13.188283 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:19:13.188295 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:19:13.188308 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:19:13.188320 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:19:13.188336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:19:13.188360 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:19:13.188373 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:19:13.188386 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:19:13.188399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:13.188411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:13.188424 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:19:13.188437 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:19:13.188449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:19:13.188470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:19:13.188483 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:19:13.188496 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:13.188509 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:19:13.188522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:19:13.188534 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:19:13.188550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:19:13.188562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:13.188574 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:19:13.188587 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:19:13.188599 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:19:13.188612 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:19:13.188624 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:19:13.188641 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:13.188660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:13.188681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:13.188704 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:19:13.188718 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:19:13.188731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:19:13.188743 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:19:13.188756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:13.188769 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:19:13.188781 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:19:13.188793 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:19:13.189865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:19:13.189889 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:19:13.189907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:19:13.189925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:13.189942 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:19:13.189959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:19:13.189977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:13.189993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:19:13.190014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:13.190039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:19:13.190061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:13.190079 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:19:13.190092 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:19:13.190104 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:19:13.190119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:19:13.190132 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:19:13.190144 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:19:13.190161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:19:13.190174 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:19:13.190187 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:19:13.190242 systemd-journald[1115]: Collecting audit messages is disabled.
Jul 2 00:19:13.190269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:19:13.190283 systemd-journald[1115]: Journal started
Jul 2 00:19:13.190311 systemd-journald[1115]: Runtime Journal (/run/log/journal/5c570bce581047dcadf613b223da909d) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:19:12.960982 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:19:12.982239 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:19:12.982647 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:19:13.192999 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:19:13.193050 systemd[1]: Stopped verity-setup.service.
Jul 2 00:19:13.196831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:13.208715 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:19:13.208536 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:19:13.211418 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:19:13.212070 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:19:13.218689 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:19:13.221060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:19:13.221526 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:19:13.222159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:13.222850 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:19:13.222989 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:19:13.224390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:13.224533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:13.226226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:13.226363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:13.233463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:19:13.240388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:13.240871 kernel: ACPI: bus type drm_connector registered
Jul 2 00:19:13.241892 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:19:13.245021 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:19:13.245200 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:19:13.247827 kernel: loop: module loaded
Jul 2 00:19:13.248903 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:13.249135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:13.264839 kernel: fuse: init (API version 7.39)
Jul 2 00:19:13.268869 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:19:13.269053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:19:13.270749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:19:13.277992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:19:13.282729 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:19:13.283904 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:19:13.283945 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:19:13.286661 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:19:13.293655 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:19:13.301109 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:19:13.302015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:13.310031 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:19:13.317033 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:19:13.317593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:13.320227 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:19:13.320736 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:13.328135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:19:13.342700 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:19:13.345272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:19:13.349871 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:19:13.350700 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:19:13.351265 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:19:13.353104 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:19:13.382954 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:19:13.383501 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:19:13.396025 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:19:13.409369 systemd-journald[1115]: Time spent on flushing to /var/log/journal/5c570bce581047dcadf613b223da909d is 64.698ms for 994 entries.
Jul 2 00:19:13.409369 systemd-journald[1115]: System Journal (/var/log/journal/5c570bce581047dcadf613b223da909d) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:19:13.491144 systemd-journald[1115]: Received client request to flush runtime journal.
Jul 2 00:19:13.491203 kernel: loop0: detected capacity change from 0 to 8
Jul 2 00:19:13.491222 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:19:13.491305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:19:13.491320 kernel: loop1: detected capacity change from 0 to 80568
Jul 2 00:19:13.417290 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:19:13.422332 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:19:13.454211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:13.463073 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:19:13.489150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:13.498072 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:19:13.506690 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Jul 2 00:19:13.506708 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
Jul 2 00:19:13.512940 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:19:13.522393 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:19:13.536084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:19:13.541886 kernel: loop2: detected capacity change from 0 to 139904
Jul 2 00:19:13.598834 kernel: loop3: detected capacity change from 0 to 210664
Jul 2 00:19:13.612975 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:19:13.622861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:19:13.660859 kernel: loop4: detected capacity change from 0 to 8
Jul 2 00:19:13.665863 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:19:13.681939 kernel: loop6: detected capacity change from 0 to 139904
Jul 2 00:19:13.692413 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 2 00:19:13.693092 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 2 00:19:13.706138 kernel: loop7: detected capacity change from 0 to 210664
Jul 2 00:19:13.714254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:13.719474 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 2 00:19:13.720085 (sd-merge)[1189]: Merged extensions into '/usr'.
Jul 2 00:19:13.732570 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:19:13.732598 systemd[1]: Reloading...
Jul 2 00:19:13.850916 zram_generator::config[1211]: No configuration found.
Jul 2 00:19:14.022058 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:19:14.095750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:19:14.187440 systemd[1]: Reloading finished in 454 ms.
Jul 2 00:19:14.211667 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:19:14.212848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:19:14.226164 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:19:14.235793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:19:14.255034 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:19:14.255057 systemd[1]: Reloading...
Jul 2 00:19:14.297228 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:19:14.297791 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:19:14.299275 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:19:14.299787 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 2 00:19:14.299907 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 2 00:19:14.306921 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:19:14.306936 systemd-tmpfiles[1258]: Skipping /boot
Jul 2 00:19:14.334612 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:19:14.334630 systemd-tmpfiles[1258]: Skipping /boot
Jul 2 00:19:14.384844 zram_generator::config[1280]: No configuration found.
Jul 2 00:19:14.534751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:19:14.591554 systemd[1]: Reloading finished in 335 ms.
Jul 2 00:19:14.610021 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:19:14.616555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:14.630135 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:19:14.635889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:19:14.646153 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:19:14.652062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:14.654948 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:14.657112 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:19:14.664255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.664481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:14.673130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:14.677123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:14.681110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:14.681993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:14.682132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.685298 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.685475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:14.685626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:14.685704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.689370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.689588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:14.698159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:19:14.698681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:14.699095 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.700876 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:19:14.702853 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:19:14.721093 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:19:14.724575 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:19:14.740153 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:19:14.743287 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:14.743458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:14.749291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:19:14.753504 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:19:14.755687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:19:14.759091 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Jul 2 00:19:14.761150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:14.761938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:14.764145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:14.773559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:14.774892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:14.775666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:14.778125 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:19:14.778709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:19:14.793968 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:19:14.796396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:14.813054 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:19:14.823995 augenrules[1374]: No rules
Jul 2 00:19:14.829930 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:19:14.839039 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:19:14.913834 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Jul 2 00:19:14.940094 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 2 00:19:14.940514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.940673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:14.949999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:14.957039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:14.962836 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:14.964350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:14.964390 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:19:14.964406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:14.973997 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:19:14.975790 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:19:14.985722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:14.987898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:14.993593 systemd-resolved[1333]: Positive Trust Anchors:
Jul 2 00:19:14.993611 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:19:14.993646 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:19:14.997298 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:19:15.000238 systemd-resolved[1333]: Using system hostname 'ci-3975.1.1-0-70f2b56eaa'.
Jul 2 00:19:15.002428 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:15.003146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:15.005518 systemd-networkd[1368]: lo: Link UP
Jul 2 00:19:15.005530 systemd-networkd[1368]: lo: Gained carrier
Jul 2 00:19:15.009714 systemd-networkd[1368]: Enumeration completed
Jul 2 00:19:15.014191 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:19:15.016532 systemd[1]: Reached target network.target - Network.
Jul 2 00:19:15.024620 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 2 00:19:15.027827 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:19:15.032325 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 2 00:19:15.033106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:15.033257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:15.049384 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:15.052243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:15.052851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1367)
Jul 2 00:19:15.054578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:15.054689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:15.066812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:19:15.076064 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:19:15.097910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:19:15.111473 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-3e:4d:36:c7:90:60.network.
Jul 2 00:19:15.112396 systemd-networkd[1368]: eth1: Link UP
Jul 2 00:19:15.112405 systemd-networkd[1368]: eth1: Gained carrier
Jul 2 00:19:15.116079 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:15.131843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:19:15.138865 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:19:15.145543 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:19:15.153324 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-76:c3:04:45:62:0a.network.
Jul 2 00:19:15.154391 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:15.154588 systemd-networkd[1368]: eth0: Link UP
Jul 2 00:19:15.154592 systemd-networkd[1368]: eth0: Gained carrier
Jul 2 00:19:15.157843 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:15.187873 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:19:15.231292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:15.242828 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:19:15.273830 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 2 00:19:15.273906 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 2 00:19:15.281935 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:19:15.282012 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 2 00:19:15.282028 kernel: [drm] features: -context_init
Jul 2 00:19:15.283916 kernel: [drm] number of scanouts: 1
Jul 2 00:19:15.283976 kernel: [drm] number of cap sets: 0
Jul 2 00:19:15.287831 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 2 00:19:15.293876 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 2 00:19:15.298921 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:19:15.310832 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 2 00:19:15.316737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:15.317070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:15.336296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:15.346566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:15.346746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:15.424076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:15.479828 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:19:15.484353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:15.509428 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:19:15.525106 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:19:15.537276 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:19:15.566991 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:19:15.569747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:15.570333 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:19:15.570538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:19:15.570657 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:19:15.571216 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:19:15.571595 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:19:15.571676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:19:15.571796 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:19:15.571930 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:19:15.572344 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:19:15.573916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:19:15.575995 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:19:15.581866 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:19:15.584607 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:19:15.587790 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:19:15.588344 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:19:15.588721 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:19:15.591975 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:19:15.592005 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:19:15.599031 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:19:15.601735 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:19:15.603894 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:19:15.613643 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:19:15.621917 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:19:15.627017 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:19:15.627479 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:19:15.630548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:19:15.642400 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:19:15.648329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:19:15.653035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:19:15.666045 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:19:15.668929 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:19:15.669413 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:19:15.678652 jq[1447]: false
Jul 2 00:19:15.679181 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:19:15.686004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:19:15.689924 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:19:15.693317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:19:15.694883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:19:15.730281 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:19:15.731913 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:19:15.740483 dbus-daemon[1446]: [system] SELinux support is enabled
Jul 2 00:19:15.743401 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:19:15.755248 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:19:15.755497 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:19:15.760310 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:19:15.760362 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:19:15.763619 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:19:15.763793 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 2 00:19:15.763842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:19:15.770370 jq[1461]: true
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found loop4
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found loop5
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found loop6
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found loop7
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda1
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda2
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda3
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found usr
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda4
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda6
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda7
Jul 2 00:19:15.775845 extend-filesystems[1450]: Found vda9
Jul 2 00:19:15.775845 extend-filesystems[1450]: Checking size of /dev/vda9
Jul 2 00:19:15.823530 update_engine[1457]: I0702 00:19:15.819237 1457 main.cc:92] Flatcar Update Engine starting
Jul 2 00:19:15.832426 coreos-metadata[1445]: Jul 02 00:19:15.812 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:15.832426 coreos-metadata[1445]: Jul 02 00:19:15.827 INFO Fetch successful
Jul 2 00:19:15.792197 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:19:15.846791 tar[1472]: linux-amd64/helm
Jul 2 00:19:15.846464 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:19:15.861160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1367)
Jul 2 00:19:15.861247 update_engine[1457]: I0702 00:19:15.849565 1457 update_check_scheduler.cc:74] Next update check in 9m2s
Jul 2 00:19:15.861333 extend-filesystems[1450]: Resized partition /dev/vda9
Jul 2 00:19:15.862107 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:19:15.872959 extend-filesystems[1486]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:19:15.885797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 2 00:19:15.885912 jq[1482]: true
Jul 2 00:19:15.929111 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:19:15.934178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:19:15.982104 systemd-logind[1456]: New seat seat0.
Jul 2 00:19:15.984008 bash[1508]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:19:15.985837 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:19:16.005582 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:19:16.005605 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:19:16.014073 systemd[1]: Starting sshkeys.service...
Jul 2 00:19:16.016027 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:19:16.039989 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 00:19:16.055881 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:19:16.072338 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:19:16.080856 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:19:16.080856 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 00:19:16.080856 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 00:19:16.083255 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:19:16.093829 extend-filesystems[1450]: Resized filesystem in /dev/vda9
Jul 2 00:19:16.093829 extend-filesystems[1450]: Found vdb
Jul 2 00:19:16.089200 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:19:16.090885 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:19:16.138273 coreos-metadata[1515]: Jul 02 00:19:16.137 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:16.150219 coreos-metadata[1515]: Jul 02 00:19:16.149 INFO Fetch successful
Jul 2 00:19:16.165893 unknown[1515]: wrote ssh authorized keys file for user: core
Jul 2 00:19:16.199793 update-ssh-keys[1525]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:19:16.202523 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:19:16.208181 systemd[1]: Finished sshkeys.service.
Jul 2 00:19:16.353166 containerd[1475]: time="2024-07-02T00:19:16.353021801Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:19:16.421822 containerd[1475]: time="2024-07-02T00:19:16.421186418Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:19:16.421822 containerd[1475]: time="2024-07-02T00:19:16.421236602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428116927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428181324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428497417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428521989Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428613859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428663273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428674738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428737994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.428990207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.429009921Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:19:16.430432 containerd[1475]: time="2024-07-02T00:19:16.429019415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430755 containerd[1475]: time="2024-07-02T00:19:16.429153013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:16.430755 containerd[1475]: time="2024-07-02T00:19:16.429168440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:19:16.430755 containerd[1475]: time="2024-07-02T00:19:16.429225339Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:19:16.430755 containerd[1475]: time="2024-07-02T00:19:16.429235216Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:19:16.438621 containerd[1475]: time="2024-07-02T00:19:16.438574867Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:19:16.438832 containerd[1475]: time="2024-07-02T00:19:16.438811814Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.439845894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.439896267Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.439913177Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.439925683Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.439937187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440094863Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440112170Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440127139Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440140765Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440155070Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440172004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440185310Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440213923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.440833 containerd[1475]: time="2024-07-02T00:19:16.440234202Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440248048Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440260565Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440274306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440380181Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440652247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440680464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440695826Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440719985Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440774144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.441168 containerd[1475]: time="2024-07-02T00:19:16.440786068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442360 containerd[1475]: time="2024-07-02T00:19:16.440796947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442477 containerd[1475]: time="2024-07-02T00:19:16.442461545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442546 containerd[1475]: time="2024-07-02T00:19:16.442535026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442630 containerd[1475]: time="2024-07-02T00:19:16.442587292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442792 containerd[1475]: time="2024-07-02T00:19:16.442778232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.442874 containerd[1475]: time="2024-07-02T00:19:16.442863837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.443052 containerd[1475]: time="2024-07-02T00:19:16.443029130Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:19:16.443754 containerd[1475]: time="2024-07-02T00:19:16.443640234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.443882 containerd[1475]: time="2024-07-02T00:19:16.443866940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.443954 containerd[1475]: time="2024-07-02T00:19:16.443941393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.444032 containerd[1475]: time="2024-07-02T00:19:16.444019468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.444427 containerd[1475]: time="2024-07-02T00:19:16.444409627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.444521 containerd[1475]: time="2024-07-02T00:19:16.444505654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.444589 containerd[1475]: time="2024-07-02T00:19:16.444578970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:19:16.444669 containerd[1475]: time="2024-07-02T00:19:16.444653104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jul 2 00:19:16.445506 containerd[1475]: time="2024-07-02T00:19:16.445380616Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:19:16.446337 containerd[1475]: time="2024-07-02T00:19:16.445853713Z" level=info msg="Connect containerd service" Jul 2 00:19:16.446337 containerd[1475]: time="2024-07-02T00:19:16.446130037Z" level=info msg="using legacy CRI server" Jul 2 00:19:16.446337 containerd[1475]: time="2024-07-02T00:19:16.446140299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:19:16.446337 containerd[1475]: time="2024-07-02T00:19:16.446249245Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:19:16.448819 containerd[1475]: time="2024-07-02T00:19:16.448738503Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:19:16.448986 containerd[1475]: time="2024-07-02T00:19:16.448796211Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:19:16.448986 containerd[1475]: time="2024-07-02T00:19:16.448922854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:19:16.448986 containerd[1475]: time="2024-07-02T00:19:16.448935903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:19:16.448986 containerd[1475]: time="2024-07-02T00:19:16.448949303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:19:16.449893 containerd[1475]: time="2024-07-02T00:19:16.448846583Z" level=info msg="Start subscribing containerd event" Jul 2 00:19:16.449893 containerd[1475]: time="2024-07-02T00:19:16.449886163Z" level=info msg="Start recovering state" Jul 2 00:19:16.450055 containerd[1475]: time="2024-07-02T00:19:16.449962948Z" level=info msg="Start event monitor" Jul 2 00:19:16.450055 containerd[1475]: time="2024-07-02T00:19:16.449983872Z" level=info msg="Start snapshots syncer" Jul 2 00:19:16.450055 containerd[1475]: time="2024-07-02T00:19:16.449992577Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:19:16.450055 containerd[1475]: time="2024-07-02T00:19:16.449999999Z" level=info msg="Start streaming server" Jul 2 00:19:16.450417 containerd[1475]: time="2024-07-02T00:19:16.450399240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:19:16.450974 containerd[1475]: time="2024-07-02T00:19:16.450643263Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:19:16.451176 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:19:16.460891 containerd[1475]: time="2024-07-02T00:19:16.459921177Z" level=info msg="containerd successfully booted in 0.108009s" Jul 2 00:19:16.461921 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:19:16.492182 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 2 00:19:16.502542 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:19:16.519878 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:19:16.520822 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:19:16.531872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:19:16.548817 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:19:16.560313 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:19:16.568345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:19:16.570860 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:19:16.691315 tar[1472]: linux-amd64/LICENSE
Jul 2 00:19:16.691315 tar[1472]: linux-amd64/README.md
Jul 2 00:19:16.708529 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:19:16.830042 systemd-networkd[1368]: eth1: Gained IPv6LL
Jul 2 00:19:16.830493 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:16.834688 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:19:16.836533 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:19:16.843040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:19:16.846119 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:19:16.871658 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:19:17.150059 systemd-networkd[1368]: eth0: Gained IPv6LL
Jul 2 00:19:17.150684 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:17.180468 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:19:17.190253 systemd[1]: Started sshd@0-146.190.126.73:22-147.75.109.163:52952.service - OpenSSH per-connection server daemon (147.75.109.163:52952).
Jul 2 00:19:17.269340 sshd[1564]: Accepted publickey for core from 147.75.109.163 port 52952 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:17.272028 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.282746 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:19:17.292234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:19:17.300424 systemd-logind[1456]: New session 1 of user core.
Jul 2 00:19:17.317356 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:19:17.330347 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:19:17.348839 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.525012 systemd[1568]: Queued start job for default target default.target.
Jul 2 00:19:17.532680 systemd[1568]: Created slice app.slice - User Application Slice.
Jul 2 00:19:17.532728 systemd[1568]: Reached target paths.target - Paths.
Jul 2 00:19:17.532775 systemd[1568]: Reached target timers.target - Timers.
Jul 2 00:19:17.537258 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:19:17.563760 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:19:17.563956 systemd[1568]: Reached target sockets.target - Sockets.
Jul 2 00:19:17.563979 systemd[1568]: Reached target basic.target - Basic System.
Jul 2 00:19:17.564037 systemd[1568]: Reached target default.target - Main User Target.
Jul 2 00:19:17.564081 systemd[1568]: Startup finished in 203ms.
Jul 2 00:19:17.564659 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:19:17.569995 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:19:17.647815 systemd[1]: Started sshd@1-146.190.126.73:22-147.75.109.163:52958.service - OpenSSH per-connection server daemon (147.75.109.163:52958).
Jul 2 00:19:17.721367 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 52958 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:17.721425 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.731245 systemd-logind[1456]: New session 2 of user core.
Jul 2 00:19:17.735003 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:19:17.812246 sshd[1579]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:17.820415 systemd[1]: sshd@1-146.190.126.73:22-147.75.109.163:52958.service: Deactivated successfully.
Jul 2 00:19:17.823385 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:19:17.825415 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:19:17.834968 systemd[1]: Started sshd@2-146.190.126.73:22-147.75.109.163:52974.service - OpenSSH per-connection server daemon (147.75.109.163:52974).
Jul 2 00:19:17.839033 systemd-logind[1456]: Removed session 2.
Jul 2 00:19:17.845155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:19:17.848654 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:19:17.850264 systemd[1]: Startup finished in 954ms (kernel) + 5.701s (initrd) + 5.499s (userspace) = 12.155s.
Jul 2 00:19:17.858327 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:19:17.899296 sshd[1590]: Accepted publickey for core from 147.75.109.163 port 52974 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:17.901998 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.909021 systemd-logind[1456]: New session 3 of user core.
Jul 2 00:19:17.915041 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:19:17.981446 sshd[1590]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:17.986648 systemd[1]: sshd@2-146.190.126.73:22-147.75.109.163:52974.service: Deactivated successfully.
Jul 2 00:19:17.989107 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:19:17.990658 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:19:17.994543 systemd-logind[1456]: Removed session 3.
Jul 2 00:19:18.639033 kubelet[1592]: E0702 00:19:18.638915 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:19:18.642570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:19:18.642777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:19:18.643387 systemd[1]: kubelet.service: Consumed 1.242s CPU time.
Jul 2 00:19:27.994462 systemd[1]: Started sshd@3-146.190.126.73:22-147.75.109.163:40976.service - OpenSSH per-connection server daemon (147.75.109.163:40976).
Jul 2 00:19:28.044154 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 40976 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.045854 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.050170 systemd-logind[1456]: New session 4 of user core.
Jul 2 00:19:28.060073 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:19:28.121919 sshd[1610]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:28.134738 systemd[1]: sshd@3-146.190.126.73:22-147.75.109.163:40976.service: Deactivated successfully.
Jul 2 00:19:28.136749 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:19:28.138937 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:19:28.146119 systemd[1]: Started sshd@4-146.190.126.73:22-147.75.109.163:40988.service - OpenSSH per-connection server daemon (147.75.109.163:40988).
Jul 2 00:19:28.146944 systemd-logind[1456]: Removed session 4.
Jul 2 00:19:28.182635 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 40988 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.184201 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.189025 systemd-logind[1456]: New session 5 of user core.
Jul 2 00:19:28.202079 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:19:28.256536 sshd[1617]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:28.268529 systemd[1]: sshd@4-146.190.126.73:22-147.75.109.163:40988.service: Deactivated successfully.
Jul 2 00:19:28.270414 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:19:28.273012 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:19:28.278149 systemd[1]: Started sshd@5-146.190.126.73:22-147.75.109.163:40994.service - OpenSSH per-connection server daemon (147.75.109.163:40994).
Jul 2 00:19:28.279429 systemd-logind[1456]: Removed session 5.
Jul 2 00:19:28.317462 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 40994 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.318951 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.323413 systemd-logind[1456]: New session 6 of user core.
Jul 2 00:19:28.327993 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:19:28.388225 sshd[1624]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:28.406664 systemd[1]: sshd@5-146.190.126.73:22-147.75.109.163:40994.service: Deactivated successfully.
Jul 2 00:19:28.409194 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:19:28.410972 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:19:28.416295 systemd[1]: Started sshd@6-146.190.126.73:22-147.75.109.163:41010.service - OpenSSH per-connection server daemon (147.75.109.163:41010).
Jul 2 00:19:28.418356 systemd-logind[1456]: Removed session 6.
Jul 2 00:19:28.460503 sshd[1631]: Accepted publickey for core from 147.75.109.163 port 41010 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.462106 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.466597 systemd-logind[1456]: New session 7 of user core.
Jul 2 00:19:28.476076 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:19:28.542098 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:19:28.542407 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:19:28.557769 sudo[1634]: pam_unix(sudo:session): session closed for user root
Jul 2 00:19:28.561297 sshd[1631]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:28.575539 systemd[1]: sshd@6-146.190.126.73:22-147.75.109.163:41010.service: Deactivated successfully.
Jul 2 00:19:28.577427 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:19:28.579996 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:19:28.585193 systemd[1]: Started sshd@7-146.190.126.73:22-147.75.109.163:41018.service - OpenSSH per-connection server daemon (147.75.109.163:41018).
Jul 2 00:19:28.586975 systemd-logind[1456]: Removed session 7.
Jul 2 00:19:28.622467 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 41018 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.624140 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.628997 systemd-logind[1456]: New session 8 of user core.
Jul 2 00:19:28.635082 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:19:28.692480 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:19:28.692768 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:19:28.693678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:19:28.698014 sudo[1643]: pam_unix(sudo:session): session closed for user root
Jul 2 00:19:28.702085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:19:28.705534 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:19:28.706159 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:19:28.722207 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:19:28.738644 auditctl[1649]: No rules
Jul 2 00:19:28.740394 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:19:28.740659 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:19:28.755150 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:19:28.783318 augenrules[1667]: No rules
Jul 2 00:19:28.785401 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:19:28.786683 sudo[1642]: pam_unix(sudo:session): session closed for user root
Jul 2 00:19:28.791121 sshd[1639]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:28.809993 systemd[1]: sshd@7-146.190.126.73:22-147.75.109.163:41018.service: Deactivated successfully.
Jul 2 00:19:28.813729 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:19:28.816906 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:19:28.824195 systemd[1]: Started sshd@8-146.190.126.73:22-147.75.109.163:41026.service - OpenSSH per-connection server daemon (147.75.109.163:41026).
Jul 2 00:19:28.831013 systemd-logind[1456]: Removed session 8.
Jul 2 00:19:28.867058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:19:28.870597 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 41026 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:19:28.872250 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:28.875157 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:19:28.879875 systemd-logind[1456]: New session 9 of user core.
Jul 2 00:19:28.884001 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:19:28.931083 kubelet[1682]: E0702 00:19:28.931027 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:19:28.935447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:19:28.935758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:19:28.943520 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:19:28.943902 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:19:29.092252 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:19:29.092490 (dockerd)[1701]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:19:29.468250 dockerd[1701]: time="2024-07-02T00:19:29.467820813Z" level=info msg="Starting up"
Jul 2 00:19:29.514920 dockerd[1701]: time="2024-07-02T00:19:29.514677620Z" level=info msg="Loading containers: start."
Jul 2 00:19:29.624879 kernel: Initializing XFRM netlink socket
Jul 2 00:19:29.652348 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection.
Jul 2 00:19:29.708050 systemd-networkd[1368]: docker0: Link UP
Jul 2 00:19:29.721999 dockerd[1701]: time="2024-07-02T00:19:29.721765119Z" level=info msg="Loading containers: done."
Jul 2 00:19:29.805484 dockerd[1701]: time="2024-07-02T00:19:29.805281275Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:19:29.805655 dockerd[1701]: time="2024-07-02T00:19:29.805530005Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:19:29.805655 dockerd[1701]: time="2024-07-02T00:19:29.805645977Z" level=info msg="Daemon has completed initialization"
Jul 2 00:19:29.808994 systemd-timesyncd[1347]: Contacted time server 104.131.155.175:123 (2.flatcar.pool.ntp.org).
Jul 2 00:19:29.809058 systemd-timesyncd[1347]: Initial clock synchronization to Tue 2024-07-02 00:19:30.100533 UTC.
Jul 2 00:19:29.840378 dockerd[1701]: time="2024-07-02T00:19:29.840309210Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:19:29.841410 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:19:30.601946 containerd[1475]: time="2024-07-02T00:19:30.601886940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 00:19:31.267566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727307647.mount: Deactivated successfully.
Jul 2 00:19:32.512582 containerd[1475]: time="2024-07-02T00:19:32.512218284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:32.513356 containerd[1475]: time="2024-07-02T00:19:32.513299637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801"
Jul 2 00:19:32.514016 containerd[1475]: time="2024-07-02T00:19:32.513473715Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:32.516459 containerd[1475]: time="2024-07-02T00:19:32.516406042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:32.517884 containerd[1475]: time="2024-07-02T00:19:32.517490699Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 1.915553157s"
Jul 2 00:19:32.517884 containerd[1475]: time="2024-07-02T00:19:32.517528446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul 2 00:19:32.541558 containerd[1475]: time="2024-07-02T00:19:32.541513761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 00:19:34.110117 containerd[1475]: time="2024-07-02T00:19:34.109599656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:34.110117 containerd[1475]: time="2024-07-02T00:19:34.109868545Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674"
Jul 2 00:19:34.111294 containerd[1475]: time="2024-07-02T00:19:34.111259279Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:34.114865 containerd[1475]: time="2024-07-02T00:19:34.114823936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:34.115974 containerd[1475]: time="2024-07-02T00:19:34.115939798Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 1.574384531s"
Jul 2 00:19:34.116104 containerd[1475]: time="2024-07-02T00:19:34.116088330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul 2 00:19:34.146472 containerd[1475]: time="2024-07-02T00:19:34.146424517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 00:19:35.265086 containerd[1475]: time="2024-07-02T00:19:35.264984025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:35.266114 containerd[1475]: time="2024-07-02T00:19:35.266057146Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120"
Jul 2 00:19:35.266711 containerd[1475]: time="2024-07-02T00:19:35.266653930Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:35.270198 containerd[1475]: time="2024-07-02T00:19:35.270133004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:35.272304 containerd[1475]: time="2024-07-02T00:19:35.271772569Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.12512189s"
Jul 2 00:19:35.272304 containerd[1475]: time="2024-07-02T00:19:35.271864032Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul 2 00:19:35.301571 containerd[1475]: time="2024-07-02T00:19:35.301531556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 00:19:36.419714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146093988.mount: Deactivated successfully.
Jul 2 00:19:36.872713 containerd[1475]: time="2024-07-02T00:19:36.872647625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:36.873487 containerd[1475]: time="2024-07-02T00:19:36.873445015Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438"
Jul 2 00:19:36.874197 containerd[1475]: time="2024-07-02T00:19:36.873935647Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:36.875671 containerd[1475]: time="2024-07-02T00:19:36.875641220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:36.876549 containerd[1475]: time="2024-07-02T00:19:36.876519954Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 1.574770054s"
Jul 2 00:19:36.876605 containerd[1475]: time="2024-07-02T00:19:36.876551385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 00:19:36.901068 containerd[1475]: time="2024-07-02T00:19:36.901028924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:19:37.522189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679492733.mount: Deactivated successfully.
Jul 2 00:19:38.236911 containerd[1475]: time="2024-07-02T00:19:38.236856852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.238026 containerd[1475]: time="2024-07-02T00:19:38.237919508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jul 2 00:19:38.238026 containerd[1475]: time="2024-07-02T00:19:38.237967672Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.241006 containerd[1475]: time="2024-07-02T00:19:38.240930287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.242540 containerd[1475]: time="2024-07-02T00:19:38.242125307Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.340919453s"
Jul 2 00:19:38.242540 containerd[1475]: time="2024-07-02T00:19:38.242163311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:19:38.272851 containerd[1475]: time="2024-07-02T00:19:38.272752791Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:19:38.857042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590120417.mount: Deactivated successfully.
Jul 2 00:19:38.860512 containerd[1475]: time="2024-07-02T00:19:38.860457607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.861734 containerd[1475]: time="2024-07-02T00:19:38.861688806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:19:38.862321 containerd[1475]: time="2024-07-02T00:19:38.862289147Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.864109 containerd[1475]: time="2024-07-02T00:19:38.864057342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:19:38.864967 containerd[1475]: time="2024-07-02T00:19:38.864793664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 592.000167ms"
Jul 2 00:19:38.864967 containerd[1475]: time="2024-07-02T00:19:38.864844857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:19:38.895324 containerd[1475]: time="2024-07-02T00:19:38.895284959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 00:19:39.011418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:19:39.023377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:19:39.139502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:39.149222 (kubelet)[1993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:39.201616 kubelet[1993]: E0702 00:19:39.201547 1993 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:39.204004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:39.204168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:19:39.448430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116906998.mount: Deactivated successfully. Jul 2 00:19:41.290665 containerd[1475]: time="2024-07-02T00:19:41.289744601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:41.290665 containerd[1475]: time="2024-07-02T00:19:41.290469558Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 00:19:41.291316 containerd[1475]: time="2024-07-02T00:19:41.291136295Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:41.295229 containerd[1475]: time="2024-07-02T00:19:41.295173264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:41.296556 containerd[1475]: time="2024-07-02T00:19:41.296516046Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.401190076s" Jul 2 00:19:41.296708 containerd[1475]: time="2024-07-02T00:19:41.296693388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 00:19:44.535555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:44.543176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:44.576930 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit session-9.scope)... Jul 2 00:19:44.576958 systemd[1]: Reloading... Jul 2 00:19:44.703848 zram_generator::config[2149]: No configuration found. Jul 2 00:19:44.849776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:19:44.934490 systemd[1]: Reloading finished in 356 ms. Jul 2 00:19:44.978063 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:19:44.978152 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:19:44.978622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:44.985281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:45.118577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:19:45.130428 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:19:45.188410 kubelet[2201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:45.188981 kubelet[2201]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:19:45.188981 kubelet[2201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:45.192837 kubelet[2201]: I0702 00:19:45.191975 2201 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:19:45.580724 kubelet[2201]: I0702 00:19:45.580676 2201 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:19:45.581046 kubelet[2201]: I0702 00:19:45.580909 2201 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:19:45.581896 kubelet[2201]: I0702 00:19:45.581756 2201 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:19:45.628088 kubelet[2201]: I0702 00:19:45.627890 2201 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:19:45.630375 kubelet[2201]: E0702 00:19:45.630335 2201 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://146.190.126.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.643741 kubelet[2201]: I0702 00:19:45.643704 2201 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:19:45.644346 kubelet[2201]: I0702 00:19:45.644309 2201 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:19:45.644699 kubelet[2201]: I0702 00:19:45.644436 2201 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-0-70f2b56eaa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory
":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:19:45.646112 kubelet[2201]: I0702 00:19:45.646066 2201 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:19:45.646112 kubelet[2201]: I0702 00:19:45.646111 2201 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:19:45.646300 kubelet[2201]: I0702 00:19:45.646281 2201 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:45.649505 kubelet[2201]: W0702 00:19:45.649446 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.126.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-0-70f2b56eaa&limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.649664 kubelet[2201]: E0702 00:19:45.649634 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.126.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-0-70f2b56eaa&limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.653060 kubelet[2201]: I0702 00:19:45.653004 2201 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:19:45.653060 kubelet[2201]: I0702 00:19:45.653070 2201 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:19:45.654098 kubelet[2201]: I0702 00:19:45.653125 2201 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:19:45.654098 kubelet[2201]: I0702 00:19:45.653157 2201 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:19:45.660566 kubelet[2201]: W0702 00:19:45.660507 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.126.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.660842 kubelet[2201]: E0702 00:19:45.660780 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.126.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.661492 kubelet[2201]: I0702 00:19:45.661334 2201 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:19:45.664941 kubelet[2201]: I0702 00:19:45.663885 2201 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:19:45.664941 kubelet[2201]: W0702 00:19:45.664008 2201 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:19:45.664941 kubelet[2201]: I0702 00:19:45.664700 2201 server.go:1264] "Started kubelet" Jul 2 00:19:45.673552 kubelet[2201]: I0702 00:19:45.672742 2201 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:19:45.674921 kubelet[2201]: I0702 00:19:45.674316 2201 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:19:45.675403 kubelet[2201]: I0702 00:19:45.675352 2201 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:19:45.676145 kubelet[2201]: I0702 00:19:45.676123 2201 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:19:45.676829 kubelet[2201]: E0702 00:19:45.676651 2201 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.126.73:6443/api/v1/namespaces/default/events\": dial tcp 146.190.126.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-0-70f2b56eaa.17de3d5eeffd268f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-0-70f2b56eaa,UID:ci-3975.1.1-0-70f2b56eaa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-0-70f2b56eaa,},FirstTimestamp:2024-07-02 00:19:45.664673423 +0000 UTC m=+0.527406584,LastTimestamp:2024-07-02 00:19:45.664673423 +0000 UTC m=+0.527406584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-0-70f2b56eaa,}" Jul 2 00:19:45.680407 kubelet[2201]: I0702 00:19:45.679906 2201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:19:45.689229 kubelet[2201]: I0702 00:19:45.689198 2201 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:19:45.691469 kubelet[2201]: I0702 00:19:45.689725 2201 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:19:45.692382 kubelet[2201]: I0702 00:19:45.692364 2201 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:19:45.692477 kubelet[2201]: E0702 00:19:45.691982 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-0-70f2b56eaa?timeout=10s\": dial tcp 146.190.126.73:6443: connect: connection refused" interval="200ms" Jul 2 00:19:45.692525 kubelet[2201]: W0702 00:19:45.691905 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.126.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.692600 kubelet[2201]: E0702 00:19:45.692591 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://146.190.126.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.695845 kubelet[2201]: I0702 00:19:45.695782 2201 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:19:45.696107 kubelet[2201]: I0702 00:19:45.696088 2201 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:19:45.698006 kubelet[2201]: E0702 00:19:45.697986 2201 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:19:45.698454 kubelet[2201]: I0702 00:19:45.698383 2201 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:19:45.715619 kubelet[2201]: I0702 00:19:45.715556 2201 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:19:45.718957 kubelet[2201]: I0702 00:19:45.718921 2201 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:19:45.718957 kubelet[2201]: I0702 00:19:45.718954 2201 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:19:45.719088 kubelet[2201]: I0702 00:19:45.718977 2201 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:19:45.719088 kubelet[2201]: E0702 00:19:45.719025 2201 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:19:45.721082 kubelet[2201]: I0702 00:19:45.721059 2201 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:19:45.721082 kubelet[2201]: I0702 00:19:45.721079 2201 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:19:45.721228 kubelet[2201]: I0702 00:19:45.721098 2201 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:45.722726 kubelet[2201]: W0702 00:19:45.722590 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.126.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.722726 kubelet[2201]: E0702 00:19:45.722625 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.126.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:45.790979 kubelet[2201]: I0702 00:19:45.790920 2201 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:45.791419 kubelet[2201]: E0702 00:19:45.791391 2201 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.126.73:6443/api/v1/nodes\": dial tcp 146.190.126.73:6443: connect: connection refused" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:45.820068 kubelet[2201]: E0702 
00:19:45.820014 2201 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:19:45.893108 kubelet[2201]: E0702 00:19:45.892952 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-0-70f2b56eaa?timeout=10s\": dial tcp 146.190.126.73:6443: connect: connection refused" interval="400ms" Jul 2 00:19:45.992989 kubelet[2201]: I0702 00:19:45.992952 2201 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:45.993962 kubelet[2201]: E0702 00:19:45.993914 2201 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.126.73:6443/api/v1/nodes\": dial tcp 146.190.126.73:6443: connect: connection refused" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.020329 kubelet[2201]: E0702 00:19:46.020269 2201 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:19:46.216118 kubelet[2201]: I0702 00:19:46.215952 2201 policy_none.go:49] "None policy: Start" Jul 2 00:19:46.217628 kubelet[2201]: I0702 00:19:46.217248 2201 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:19:46.217628 kubelet[2201]: I0702 00:19:46.217286 2201 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:19:46.225539 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:19:46.241028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:19:46.251962 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 2 00:19:46.255299 kubelet[2201]: I0702 00:19:46.254974 2201 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:19:46.255505 kubelet[2201]: I0702 00:19:46.255430 2201 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:19:46.255726 kubelet[2201]: I0702 00:19:46.255562 2201 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:19:46.258578 kubelet[2201]: E0702 00:19:46.258489 2201 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-0-70f2b56eaa\" not found" Jul 2 00:19:46.293534 kubelet[2201]: E0702 00:19:46.293478 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-0-70f2b56eaa?timeout=10s\": dial tcp 146.190.126.73:6443: connect: connection refused" interval="800ms" Jul 2 00:19:46.395668 kubelet[2201]: I0702 00:19:46.395583 2201 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.396018 kubelet[2201]: E0702 00:19:46.395988 2201 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.126.73:6443/api/v1/nodes\": dial tcp 146.190.126.73:6443: connect: connection refused" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.420570 kubelet[2201]: I0702 00:19:46.420451 2201 topology_manager.go:215] "Topology Admit Handler" podUID="73d17ff61e0a9e793fd65c6d427cc748" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.421857 kubelet[2201]: I0702 00:19:46.421634 2201 topology_manager.go:215] "Topology Admit Handler" podUID="f26e9628802bc0fdad501219396d5db4" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.423871 kubelet[2201]: I0702 
00:19:46.423321 2201 topology_manager.go:215] "Topology Admit Handler" podUID="a34d1ab572932e4b21919a813fc69b10" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.430990 systemd[1]: Created slice kubepods-burstable-pod73d17ff61e0a9e793fd65c6d427cc748.slice - libcontainer container kubepods-burstable-pod73d17ff61e0a9e793fd65c6d427cc748.slice. Jul 2 00:19:46.448050 systemd[1]: Created slice kubepods-burstable-podf26e9628802bc0fdad501219396d5db4.slice - libcontainer container kubepods-burstable-podf26e9628802bc0fdad501219396d5db4.slice. Jul 2 00:19:46.454622 systemd[1]: Created slice kubepods-burstable-poda34d1ab572932e4b21919a813fc69b10.slice - libcontainer container kubepods-burstable-poda34d1ab572932e4b21919a813fc69b10.slice. Jul 2 00:19:46.496868 kubelet[2201]: I0702 00:19:46.496655 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.496868 kubelet[2201]: I0702 00:19:46.496725 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.496868 kubelet[2201]: I0702 00:19:46.496762 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " 
pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.496868 kubelet[2201]: I0702 00:19:46.496787 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.496868 kubelet[2201]: I0702 00:19:46.496843 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.497215 kubelet[2201]: I0702 00:19:46.496889 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.497215 kubelet[2201]: I0702 00:19:46.496917 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.497215 kubelet[2201]: I0702 00:19:46.496946 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.497215 kubelet[2201]: I0702 00:19:46.496979 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a34d1ab572932e4b21919a813fc69b10-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-0-70f2b56eaa\" (UID: \"a34d1ab572932e4b21919a813fc69b10\") " pod="kube-system/kube-scheduler-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:46.608838 kubelet[2201]: W0702 00:19:46.608723 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.126.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.608838 kubelet[2201]: E0702 00:19:46.608790 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.126.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.744710 kubelet[2201]: E0702 00:19:46.744655 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:46.745592 containerd[1475]: time="2024-07-02T00:19:46.745536035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-0-70f2b56eaa,Uid:73d17ff61e0a9e793fd65c6d427cc748,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:46.753003 kubelet[2201]: E0702 00:19:46.752855 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:46.753887 kubelet[2201]: W0702 00:19:46.753591 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.126.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.753887 kubelet[2201]: E0702 00:19:46.753656 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.126.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.754254 containerd[1475]: time="2024-07-02T00:19:46.753627786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-0-70f2b56eaa,Uid:f26e9628802bc0fdad501219396d5db4,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:46.758073 kubelet[2201]: E0702 00:19:46.758037 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:46.758758 containerd[1475]: time="2024-07-02T00:19:46.758552662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-0-70f2b56eaa,Uid:a34d1ab572932e4b21919a813fc69b10,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:46.950633 kubelet[2201]: W0702 00:19:46.950527 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.126.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-0-70f2b56eaa&limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.950633 kubelet[2201]: E0702 00:19:46.950616 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://146.190.126.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-0-70f2b56eaa&limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.977424 kubelet[2201]: W0702 00:19:46.977379 2201 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.126.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:46.977424 kubelet[2201]: E0702 00:19:46.977425 2201 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.126.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:47.094068 kubelet[2201]: E0702 00:19:47.094014 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-0-70f2b56eaa?timeout=10s\": dial tcp 146.190.126.73:6443: connect: connection refused" interval="1.6s" Jul 2 00:19:47.197109 kubelet[2201]: I0702 00:19:47.197068 2201 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:47.197506 kubelet[2201]: E0702 00:19:47.197433 2201 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.126.73:6443/api/v1/nodes\": dial tcp 146.190.126.73:6443: connect: connection refused" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:47.577098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471873771.mount: Deactivated successfully. 
Jul 2 00:19:47.581969 containerd[1475]: time="2024-07-02T00:19:47.581916656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:47.583523 containerd[1475]: time="2024-07-02T00:19:47.583465660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:19:47.585471 containerd[1475]: time="2024-07-02T00:19:47.585420417Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:47.587244 containerd[1475]: time="2024-07-02T00:19:47.586860293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:47.587244 containerd[1475]: time="2024-07-02T00:19:47.587052423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:19:47.588413 containerd[1475]: time="2024-07-02T00:19:47.588379626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:19:47.588559 containerd[1475]: time="2024-07-02T00:19:47.588542152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:47.591757 containerd[1475]: time="2024-07-02T00:19:47.591700761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:47.592827 
containerd[1475]: time="2024-07-02T00:19:47.592766283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 844.289477ms" Jul 2 00:19:47.594416 containerd[1475]: time="2024-07-02T00:19:47.594332702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 835.672207ms" Jul 2 00:19:47.601386 containerd[1475]: time="2024-07-02T00:19:47.601038115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 847.298668ms" Jul 2 00:19:47.737416 containerd[1475]: time="2024-07-02T00:19:47.737093248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:47.737416 containerd[1475]: time="2024-07-02T00:19:47.737196114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.737416 containerd[1475]: time="2024-07-02T00:19:47.737230005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:47.737416 containerd[1475]: time="2024-07-02T00:19:47.737253873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.743312 containerd[1475]: time="2024-07-02T00:19:47.743042420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:47.743312 containerd[1475]: time="2024-07-02T00:19:47.743094033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.743312 containerd[1475]: time="2024-07-02T00:19:47.743108754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:47.743312 containerd[1475]: time="2024-07-02T00:19:47.743117959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.751447 containerd[1475]: time="2024-07-02T00:19:47.751321551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:47.754001 containerd[1475]: time="2024-07-02T00:19:47.751596912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.754001 containerd[1475]: time="2024-07-02T00:19:47.751620324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:47.754001 containerd[1475]: time="2024-07-02T00:19:47.751672493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:47.772106 systemd[1]: Started cri-containerd-08cb2e2f94e2a266e5be6c780054dafdcb102f7eafdf3bfae310be13ca8a43f9.scope - libcontainer container 08cb2e2f94e2a266e5be6c780054dafdcb102f7eafdf3bfae310be13ca8a43f9. 
Jul 2 00:19:47.791073 systemd[1]: Started cri-containerd-451f32d0cbbe6936ed062afc16c10c9c8b8f51d0dcc43759aefdc783dce48550.scope - libcontainer container 451f32d0cbbe6936ed062afc16c10c9c8b8f51d0dcc43759aefdc783dce48550. Jul 2 00:19:47.797730 systemd[1]: Started cri-containerd-f020543bc0c695bbd953931de6ed21151a6871c3978b5aca350265014d8be52c.scope - libcontainer container f020543bc0c695bbd953931de6ed21151a6871c3978b5aca350265014d8be52c. Jul 2 00:19:47.816360 kubelet[2201]: E0702 00:19:47.815674 2201 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.126.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.126.73:6443: connect: connection refused Jul 2 00:19:47.858674 containerd[1475]: time="2024-07-02T00:19:47.858522970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-0-70f2b56eaa,Uid:a34d1ab572932e4b21919a813fc69b10,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cb2e2f94e2a266e5be6c780054dafdcb102f7eafdf3bfae310be13ca8a43f9\"" Jul 2 00:19:47.860657 kubelet[2201]: E0702 00:19:47.860617 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:47.865253 containerd[1475]: time="2024-07-02T00:19:47.865138890Z" level=info msg="CreateContainer within sandbox \"08cb2e2f94e2a266e5be6c780054dafdcb102f7eafdf3bfae310be13ca8a43f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:19:47.884840 containerd[1475]: time="2024-07-02T00:19:47.883922806Z" level=info msg="CreateContainer within sandbox \"08cb2e2f94e2a266e5be6c780054dafdcb102f7eafdf3bfae310be13ca8a43f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"d288fbee803ea383a806d340d5f18bbe3bb5b58d7af154aee33239ecb0fcbf22\"" Jul 2 00:19:47.885591 containerd[1475]: time="2024-07-02T00:19:47.885560755Z" level=info msg="StartContainer for \"d288fbee803ea383a806d340d5f18bbe3bb5b58d7af154aee33239ecb0fcbf22\"" Jul 2 00:19:47.897294 containerd[1475]: time="2024-07-02T00:19:47.897237084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-0-70f2b56eaa,Uid:73d17ff61e0a9e793fd65c6d427cc748,Namespace:kube-system,Attempt:0,} returns sandbox id \"451f32d0cbbe6936ed062afc16c10c9c8b8f51d0dcc43759aefdc783dce48550\"" Jul 2 00:19:47.898769 kubelet[2201]: E0702 00:19:47.898385 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:47.904725 containerd[1475]: time="2024-07-02T00:19:47.904255093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-0-70f2b56eaa,Uid:f26e9628802bc0fdad501219396d5db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f020543bc0c695bbd953931de6ed21151a6871c3978b5aca350265014d8be52c\"" Jul 2 00:19:47.904993 kubelet[2201]: E0702 00:19:47.904972 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:47.905326 containerd[1475]: time="2024-07-02T00:19:47.905167397Z" level=info msg="CreateContainer within sandbox \"451f32d0cbbe6936ed062afc16c10c9c8b8f51d0dcc43759aefdc783dce48550\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:19:47.910246 containerd[1475]: time="2024-07-02T00:19:47.910173501Z" level=info msg="CreateContainer within sandbox \"f020543bc0c695bbd953931de6ed21151a6871c3978b5aca350265014d8be52c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:19:47.918246 
containerd[1475]: time="2024-07-02T00:19:47.918160860Z" level=info msg="CreateContainer within sandbox \"451f32d0cbbe6936ed062afc16c10c9c8b8f51d0dcc43759aefdc783dce48550\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62a16a14d38b94e629fa985605ac3ccb906002cb4089fbd6c31edd95d7013d09\"" Jul 2 00:19:47.918737 containerd[1475]: time="2024-07-02T00:19:47.918665626Z" level=info msg="StartContainer for \"62a16a14d38b94e629fa985605ac3ccb906002cb4089fbd6c31edd95d7013d09\"" Jul 2 00:19:47.935062 systemd[1]: Started cri-containerd-d288fbee803ea383a806d340d5f18bbe3bb5b58d7af154aee33239ecb0fcbf22.scope - libcontainer container d288fbee803ea383a806d340d5f18bbe3bb5b58d7af154aee33239ecb0fcbf22. Jul 2 00:19:47.952439 containerd[1475]: time="2024-07-02T00:19:47.951635334Z" level=info msg="CreateContainer within sandbox \"f020543bc0c695bbd953931de6ed21151a6871c3978b5aca350265014d8be52c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1007cae5a1332fe0dc6ed5493b09d750a5d1b16578240672251cf266f35bc821\"" Jul 2 00:19:47.952439 containerd[1475]: time="2024-07-02T00:19:47.952411854Z" level=info msg="StartContainer for \"1007cae5a1332fe0dc6ed5493b09d750a5d1b16578240672251cf266f35bc821\"" Jul 2 00:19:47.993081 systemd[1]: Started cri-containerd-62a16a14d38b94e629fa985605ac3ccb906002cb4089fbd6c31edd95d7013d09.scope - libcontainer container 62a16a14d38b94e629fa985605ac3ccb906002cb4089fbd6c31edd95d7013d09. Jul 2 00:19:47.996352 systemd[1]: Started cri-containerd-1007cae5a1332fe0dc6ed5493b09d750a5d1b16578240672251cf266f35bc821.scope - libcontainer container 1007cae5a1332fe0dc6ed5493b09d750a5d1b16578240672251cf266f35bc821. 
Jul 2 00:19:48.018047 containerd[1475]: time="2024-07-02T00:19:48.017243022Z" level=info msg="StartContainer for \"d288fbee803ea383a806d340d5f18bbe3bb5b58d7af154aee33239ecb0fcbf22\" returns successfully" Jul 2 00:19:48.066727 containerd[1475]: time="2024-07-02T00:19:48.066620145Z" level=info msg="StartContainer for \"62a16a14d38b94e629fa985605ac3ccb906002cb4089fbd6c31edd95d7013d09\" returns successfully" Jul 2 00:19:48.089577 containerd[1475]: time="2024-07-02T00:19:48.089409956Z" level=info msg="StartContainer for \"1007cae5a1332fe0dc6ed5493b09d750a5d1b16578240672251cf266f35bc821\" returns successfully" Jul 2 00:19:48.738117 kubelet[2201]: E0702 00:19:48.738083 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:48.743081 kubelet[2201]: E0702 00:19:48.743053 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:48.743946 kubelet[2201]: E0702 00:19:48.743757 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:48.799435 kubelet[2201]: I0702 00:19:48.799374 2201 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:49.745782 kubelet[2201]: E0702 00:19:49.745753 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:50.729908 kubelet[2201]: I0702 00:19:50.729839 2201 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:50.774600 kubelet[2201]: E0702 00:19:50.774463 2201 event.go:359] "Server 
rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975.1.1-0-70f2b56eaa.17de3d5eeffd268f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-0-70f2b56eaa,UID:ci-3975.1.1-0-70f2b56eaa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-0-70f2b56eaa,},FirstTimestamp:2024-07-02 00:19:45.664673423 +0000 UTC m=+0.527406584,LastTimestamp:2024-07-02 00:19:45.664673423 +0000 UTC m=+0.527406584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-0-70f2b56eaa,}" Jul 2 00:19:50.794948 kubelet[2201]: E0702 00:19:50.794904 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 2 00:19:51.512958 kubelet[2201]: E0702 00:19:51.512907 2201 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:51.513635 kubelet[2201]: E0702 00:19:51.513293 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:51.663948 kubelet[2201]: I0702 00:19:51.663736 2201 apiserver.go:52] "Watching apiserver" Jul 2 00:19:51.692928 kubelet[2201]: I0702 00:19:51.692872 2201 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:19:53.064825 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)... Jul 2 00:19:53.064843 systemd[1]: Reloading... 
Jul 2 00:19:53.154940 zram_generator::config[2508]: No configuration found. Jul 2 00:19:53.307521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:19:53.383912 kubelet[2201]: W0702 00:19:53.382235 2201 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:19:53.383912 kubelet[2201]: E0702 00:19:53.382783 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:53.413956 systemd[1]: Reloading finished in 348 ms. Jul 2 00:19:53.464006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:53.478431 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:19:53.478688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:53.485632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:53.615717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:53.629979 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:19:53.698130 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:53.698130 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 00:19:53.698130 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:53.700525 kubelet[2562]: I0702 00:19:53.700444 2562 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:19:53.707059 kubelet[2562]: I0702 00:19:53.707020 2562 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:19:53.707059 kubelet[2562]: I0702 00:19:53.707049 2562 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:19:53.707372 kubelet[2562]: I0702 00:19:53.707355 2562 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:19:53.708935 kubelet[2562]: I0702 00:19:53.708793 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:19:53.713328 kubelet[2562]: I0702 00:19:53.713028 2562 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:19:53.720748 kubelet[2562]: I0702 00:19:53.720711 2562 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:19:53.721017 kubelet[2562]: I0702 00:19:53.720967 2562 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:19:53.721194 kubelet[2562]: I0702 00:19:53.721003 2562 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-0-70f2b56eaa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:19:53.721277 kubelet[2562]: I0702 00:19:53.721206 2562 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:19:53.721277 kubelet[2562]: I0702 00:19:53.721216 2562 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:19:53.721277 kubelet[2562]: I0702 00:19:53.721259 2562 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:53.721385 kubelet[2562]: I0702 00:19:53.721374 2562 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:19:53.721414 kubelet[2562]: I0702 00:19:53.721391 2562 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:19:53.721830 kubelet[2562]: I0702 00:19:53.721794 2562 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:19:53.724161 kubelet[2562]: I0702 00:19:53.724131 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:19:53.728854 kubelet[2562]: I0702 00:19:53.726575 2562 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:19:53.728854 kubelet[2562]: I0702 00:19:53.726748 2562 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:19:53.728854 kubelet[2562]: I0702 00:19:53.727148 2562 server.go:1264] "Started kubelet" Jul 2 00:19:53.728854 kubelet[2562]: I0702 00:19:53.727607 2562 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:19:53.728854 kubelet[2562]: I0702 00:19:53.728562 2562 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:19:53.730216 kubelet[2562]: I0702 00:19:53.730144 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:19:53.730506 kubelet[2562]: I0702 00:19:53.730492 2562 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:19:53.741136 kubelet[2562]: I0702 00:19:53.741107 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:19:53.741528 kubelet[2562]: I0702 00:19:53.741508 2562 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:19:53.744575 kubelet[2562]: I0702 00:19:53.744342 2562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:19:53.745002 kubelet[2562]: I0702 00:19:53.744987 2562 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:19:53.749747 kubelet[2562]: E0702 00:19:53.749713 2562 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:19:53.757277 kubelet[2562]: I0702 00:19:53.757250 2562 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:19:53.758922 kubelet[2562]: I0702 00:19:53.758305 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:19:53.762634 kubelet[2562]: I0702 00:19:53.762609 2562 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:19:53.778740 kubelet[2562]: I0702 00:19:53.778630 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:19:53.779736 kubelet[2562]: I0702 00:19:53.779711 2562 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:19:53.779827 kubelet[2562]: I0702 00:19:53.779743 2562 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:19:53.779827 kubelet[2562]: I0702 00:19:53.779764 2562 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:19:53.780131 kubelet[2562]: E0702 00:19:53.779918 2562 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:19:53.814816 kubelet[2562]: I0702 00:19:53.814774 2562 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:19:53.815460 kubelet[2562]: I0702 00:19:53.815153 2562 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:19:53.815460 kubelet[2562]: I0702 00:19:53.815177 2562 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:53.815460 kubelet[2562]: I0702 00:19:53.815343 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:19:53.815460 kubelet[2562]: I0702 00:19:53.815354 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:19:53.815460 kubelet[2562]: I0702 00:19:53.815373 2562 policy_none.go:49] "None policy: Start" Jul 2 00:19:53.816885 kubelet[2562]: I0702 00:19:53.816859 2562 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:19:53.817075 kubelet[2562]: I0702 00:19:53.816973 2562 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:19:53.817288 kubelet[2562]: I0702 00:19:53.817269 2562 state_mem.go:75] "Updated machine memory state" Jul 2 00:19:53.821477 kubelet[2562]: I0702 00:19:53.821371 2562 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:19:53.821594 kubelet[2562]: I0702 00:19:53.821537 2562 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:19:53.821646 kubelet[2562]: I0702 00:19:53.821635 2562 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:19:53.850485 kubelet[2562]: I0702 00:19:53.850447 2562 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.859614 kubelet[2562]: I0702 00:19:53.859381 2562 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.859614 kubelet[2562]: I0702 00:19:53.859520 2562 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.881896 kubelet[2562]: I0702 00:19:53.880962 2562 topology_manager.go:215] "Topology Admit Handler" podUID="73d17ff61e0a9e793fd65c6d427cc748" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.881896 kubelet[2562]: I0702 00:19:53.881136 2562 topology_manager.go:215] "Topology Admit Handler" podUID="f26e9628802bc0fdad501219396d5db4" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.881896 kubelet[2562]: I0702 00:19:53.881204 2562 topology_manager.go:215] "Topology Admit Handler" podUID="a34d1ab572932e4b21919a813fc69b10" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:53.886945 kubelet[2562]: W0702 00:19:53.886836 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:19:53.889516 kubelet[2562]: W0702 00:19:53.889165 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:19:53.889811 kubelet[2562]: W0702 00:19:53.889774 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:19:53.890106 kubelet[2562]: E0702 00:19:53.889974 2562 kubelet.go:1928] "Failed creating 
a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047161 kubelet[2562]: I0702 00:19:54.045952 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047161 kubelet[2562]: I0702 00:19:54.046032 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047161 kubelet[2562]: I0702 00:19:54.046054 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047161 kubelet[2562]: I0702 00:19:54.046091 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047161 kubelet[2562]: I0702 00:19:54.046110 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047423 kubelet[2562]: I0702 00:19:54.046177 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047423 kubelet[2562]: I0702 00:19:54.046194 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f26e9628802bc0fdad501219396d5db4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-0-70f2b56eaa\" (UID: \"f26e9628802bc0fdad501219396d5db4\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047423 kubelet[2562]: I0702 00:19:54.046211 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a34d1ab572932e4b21919a813fc69b10-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-0-70f2b56eaa\" (UID: \"a34d1ab572932e4b21919a813fc69b10\") " pod="kube-system/kube-scheduler-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.047423 kubelet[2562]: I0702 00:19:54.046249 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73d17ff61e0a9e793fd65c6d427cc748-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" (UID: \"73d17ff61e0a9e793fd65c6d427cc748\") " 
pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.068626 sudo[2594]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:19:54.069008 sudo[2594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:19:54.188838 kubelet[2562]: E0702 00:19:54.188487 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.191343 kubelet[2562]: E0702 00:19:54.191232 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.191737 kubelet[2562]: E0702 00:19:54.191720 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.700186 sudo[2594]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:54.724835 kubelet[2562]: I0702 00:19:54.724563 2562 apiserver.go:52] "Watching apiserver" Jul 2 00:19:54.746320 kubelet[2562]: I0702 00:19:54.746265 2562 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:19:54.805308 kubelet[2562]: E0702 00:19:54.805083 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.807748 kubelet[2562]: E0702 00:19:54.805867 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.818724 kubelet[2562]: W0702 00:19:54.818537 2562 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:19:54.821048 kubelet[2562]: E0702 00:19:54.819046 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-0-70f2b56eaa\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" Jul 2 00:19:54.821048 kubelet[2562]: E0702 00:19:54.819509 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:54.847339 kubelet[2562]: I0702 00:19:54.847250 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-0-70f2b56eaa" podStartSLOduration=1.8472331880000001 podStartE2EDuration="1.847233188s" podCreationTimestamp="2024-07-02 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:54.838345613 +0000 UTC m=+1.203672868" watchObservedRunningTime="2024-07-02 00:19:54.847233188 +0000 UTC m=+1.212560443" Jul 2 00:19:54.859460 kubelet[2562]: I0702 00:19:54.858881 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-0-70f2b56eaa" podStartSLOduration=1.858854526 podStartE2EDuration="1.858854526s" podCreationTimestamp="2024-07-02 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:54.847719998 +0000 UTC m=+1.213047254" watchObservedRunningTime="2024-07-02 00:19:54.858854526 +0000 UTC m=+1.224181784" Jul 2 00:19:55.807842 kubelet[2562]: E0702 00:19:55.807338 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Jul 2 00:19:56.397998 sudo[1691]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:56.401872 sshd[1675]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:56.407153 systemd[1]: sshd@8-146.190.126.73:22-147.75.109.163:41026.service: Deactivated successfully. Jul 2 00:19:56.410221 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:19:56.410638 systemd[1]: session-9.scope: Consumed 5.678s CPU time, 137.1M memory peak, 0B memory swap peak. Jul 2 00:19:56.411705 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:19:56.412982 systemd-logind[1456]: Removed session 9. Jul 2 00:19:56.808919 kubelet[2562]: E0702 00:19:56.808883 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:19:57.170015 kubelet[2562]: E0702 00:19:57.169868 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:00.698713 update_engine[1457]: I0702 00:20:00.698614 1457 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:20:00.732859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2637) Jul 2 00:20:00.784379 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2635) Jul 2 00:20:00.860161 kubelet[2562]: E0702 00:20:00.857662 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:00.892938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2635) Jul 2 00:20:00.895642 kubelet[2562]: I0702 00:20:00.895388 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-0-70f2b56eaa" podStartSLOduration=7.895349475 podStartE2EDuration="7.895349475s" podCreationTimestamp="2024-07-02 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:54.860243789 +0000 UTC m=+1.225571034" watchObservedRunningTime="2024-07-02 00:20:00.895349475 +0000 UTC m=+7.260676722" Jul 2 00:20:01.820828 kubelet[2562]: E0702 00:20:01.820766 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:05.133942 kubelet[2562]: E0702 00:20:05.133825 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:06.595224 kubelet[2562]: I0702 00:20:06.594966 2562 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:20:06.595777 containerd[1475]: time="2024-07-02T00:20:06.595635601Z" level=info msg="No cni config template is specified, wait 
for other system components to drop the config." Jul 2 00:20:06.597012 kubelet[2562]: I0702 00:20:06.596167 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:20:07.176852 kubelet[2562]: E0702 00:20:07.176449 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:07.326312 kubelet[2562]: I0702 00:20:07.326265 2562 topology_manager.go:215] "Topology Admit Handler" podUID="887d380c-fe75-4e75-8ad1-51de1cf5e760" podNamespace="kube-system" podName="kube-proxy-bt9sb" Jul 2 00:20:07.330327 kubelet[2562]: I0702 00:20:07.330296 2562 topology_manager.go:215] "Topology Admit Handler" podUID="cc024084-5098-453a-94cf-bfb0964f844e" podNamespace="kube-system" podName="cilium-4hzlt" Jul 2 00:20:07.339212 systemd[1]: Created slice kubepods-besteffort-pod887d380c_fe75_4e75_8ad1_51de1cf5e760.slice - libcontainer container kubepods-besteffort-pod887d380c_fe75_4e75_8ad1_51de1cf5e760.slice. Jul 2 00:20:07.353606 systemd[1]: Created slice kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice - libcontainer container kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice. 
Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447222 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-bpf-maps\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447299 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cni-path\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447337 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-etc-cni-netd\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447356 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/887d380c-fe75-4e75-8ad1-51de1cf5e760-xtables-lock\") pod \"kube-proxy-bt9sb\" (UID: \"887d380c-fe75-4e75-8ad1-51de1cf5e760\") " pod="kube-system/kube-proxy-bt9sb" Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447371 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-run\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447674 kubelet[2562]: I0702 00:20:07.447386 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-xtables-lock\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447976 kubelet[2562]: I0702 00:20:07.447403 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc024084-5098-453a-94cf-bfb0964f844e-clustermesh-secrets\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447976 kubelet[2562]: I0702 00:20:07.447441 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-net\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447976 kubelet[2562]: I0702 00:20:07.447460 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-kernel\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447976 kubelet[2562]: I0702 00:20:07.447480 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-hubble-tls\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.447976 kubelet[2562]: I0702 00:20:07.447495 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/887d380c-fe75-4e75-8ad1-51de1cf5e760-lib-modules\") pod 
\"kube-proxy-bt9sb\" (UID: \"887d380c-fe75-4e75-8ad1-51de1cf5e760\") " pod="kube-system/kube-proxy-bt9sb" Jul 2 00:20:07.448123 kubelet[2562]: I0702 00:20:07.447511 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc024084-5098-453a-94cf-bfb0964f844e-cilium-config-path\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.448123 kubelet[2562]: I0702 00:20:07.447526 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-cgroup\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.448123 kubelet[2562]: I0702 00:20:07.447545 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/887d380c-fe75-4e75-8ad1-51de1cf5e760-kube-proxy\") pod \"kube-proxy-bt9sb\" (UID: \"887d380c-fe75-4e75-8ad1-51de1cf5e760\") " pod="kube-system/kube-proxy-bt9sb" Jul 2 00:20:07.448123 kubelet[2562]: I0702 00:20:07.447559 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-hostproc\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.448123 kubelet[2562]: I0702 00:20:07.447574 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2mf\" (UniqueName: \"kubernetes.io/projected/887d380c-fe75-4e75-8ad1-51de1cf5e760-kube-api-access-7v2mf\") pod \"kube-proxy-bt9sb\" (UID: \"887d380c-fe75-4e75-8ad1-51de1cf5e760\") " pod="kube-system/kube-proxy-bt9sb" Jul 2 
00:20:07.448123 kubelet[2562]: I0702 00:20:07.447597 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-lib-modules\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.448385 kubelet[2562]: I0702 00:20:07.447617 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkcg9\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-kube-api-access-lkcg9\") pod \"cilium-4hzlt\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " pod="kube-system/cilium-4hzlt" Jul 2 00:20:07.651043 kubelet[2562]: E0702 00:20:07.649737 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:07.653278 containerd[1475]: time="2024-07-02T00:20:07.653170957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bt9sb,Uid:887d380c-fe75-4e75-8ad1-51de1cf5e760,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:07.659630 kubelet[2562]: E0702 00:20:07.659577 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:07.662214 containerd[1475]: time="2024-07-02T00:20:07.660789460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hzlt,Uid:cc024084-5098-453a-94cf-bfb0964f844e,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:07.692249 containerd[1475]: time="2024-07-02T00:20:07.692000011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:07.692249 containerd[1475]: time="2024-07-02T00:20:07.692066201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:07.692249 containerd[1475]: time="2024-07-02T00:20:07.692086566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:07.692249 containerd[1475]: time="2024-07-02T00:20:07.692097364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:07.693078 containerd[1475]: time="2024-07-02T00:20:07.692957665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:07.693884 containerd[1475]: time="2024-07-02T00:20:07.693015990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:07.693884 containerd[1475]: time="2024-07-02T00:20:07.693382367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:07.693884 containerd[1475]: time="2024-07-02T00:20:07.693413626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:07.742985 systemd[1]: Started cri-containerd-93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7.scope - libcontainer container 93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7. Jul 2 00:20:07.754961 systemd[1]: Started cri-containerd-f901b8c686569048db2f7e15149f1eb9ac8a7b97aa3d3a2b9736bed8be0cffa5.scope - libcontainer container f901b8c686569048db2f7e15149f1eb9ac8a7b97aa3d3a2b9736bed8be0cffa5. 
Jul 2 00:20:07.805265 kubelet[2562]: I0702 00:20:07.805217 2562 topology_manager.go:215] "Topology Admit Handler" podUID="dd115817-4e8c-461e-8f88-d4b90cd86369" podNamespace="kube-system" podName="cilium-operator-599987898-xt5v9" Jul 2 00:20:07.820399 systemd[1]: Created slice kubepods-besteffort-poddd115817_4e8c_461e_8f88_d4b90cd86369.slice - libcontainer container kubepods-besteffort-poddd115817_4e8c_461e_8f88_d4b90cd86369.slice. Jul 2 00:20:07.853649 kubelet[2562]: I0702 00:20:07.851398 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/dd115817-4e8c-461e-8f88-d4b90cd86369-kube-api-access-s4plm\") pod \"cilium-operator-599987898-xt5v9\" (UID: \"dd115817-4e8c-461e-8f88-d4b90cd86369\") " pod="kube-system/cilium-operator-599987898-xt5v9" Jul 2 00:20:07.853649 kubelet[2562]: I0702 00:20:07.851481 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd115817-4e8c-461e-8f88-d4b90cd86369-cilium-config-path\") pod \"cilium-operator-599987898-xt5v9\" (UID: \"dd115817-4e8c-461e-8f88-d4b90cd86369\") " pod="kube-system/cilium-operator-599987898-xt5v9" Jul 2 00:20:07.856232 containerd[1475]: time="2024-07-02T00:20:07.856148284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hzlt,Uid:cc024084-5098-453a-94cf-bfb0964f844e,Namespace:kube-system,Attempt:0,} returns sandbox id \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\"" Jul 2 00:20:07.857566 kubelet[2562]: E0702 00:20:07.857476 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:07.869156 containerd[1475]: time="2024-07-02T00:20:07.869105600Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:20:07.893473 containerd[1475]: time="2024-07-02T00:20:07.893414864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bt9sb,Uid:887d380c-fe75-4e75-8ad1-51de1cf5e760,Namespace:kube-system,Attempt:0,} returns sandbox id \"f901b8c686569048db2f7e15149f1eb9ac8a7b97aa3d3a2b9736bed8be0cffa5\"" Jul 2 00:20:07.896401 kubelet[2562]: E0702 00:20:07.896068 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:07.903058 containerd[1475]: time="2024-07-02T00:20:07.902845836Z" level=info msg="CreateContainer within sandbox \"f901b8c686569048db2f7e15149f1eb9ac8a7b97aa3d3a2b9736bed8be0cffa5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:20:07.947159 containerd[1475]: time="2024-07-02T00:20:07.947014611Z" level=info msg="CreateContainer within sandbox \"f901b8c686569048db2f7e15149f1eb9ac8a7b97aa3d3a2b9736bed8be0cffa5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"209119cd1b92030439060e6237e83a5e6393d454c9f9a856b97f20902b431d36\"" Jul 2 00:20:07.948483 containerd[1475]: time="2024-07-02T00:20:07.947719266Z" level=info msg="StartContainer for \"209119cd1b92030439060e6237e83a5e6393d454c9f9a856b97f20902b431d36\"" Jul 2 00:20:07.984034 systemd[1]: Started cri-containerd-209119cd1b92030439060e6237e83a5e6393d454c9f9a856b97f20902b431d36.scope - libcontainer container 209119cd1b92030439060e6237e83a5e6393d454c9f9a856b97f20902b431d36. 
Jul 2 00:20:08.014645 containerd[1475]: time="2024-07-02T00:20:08.014580981Z" level=info msg="StartContainer for \"209119cd1b92030439060e6237e83a5e6393d454c9f9a856b97f20902b431d36\" returns successfully" Jul 2 00:20:08.128455 kubelet[2562]: E0702 00:20:08.126678 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:08.128637 containerd[1475]: time="2024-07-02T00:20:08.127458326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xt5v9,Uid:dd115817-4e8c-461e-8f88-d4b90cd86369,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:08.162969 containerd[1475]: time="2024-07-02T00:20:08.162782117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:08.163299 containerd[1475]: time="2024-07-02T00:20:08.162912100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:08.163299 containerd[1475]: time="2024-07-02T00:20:08.162935125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:08.163299 containerd[1475]: time="2024-07-02T00:20:08.162951870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:08.189079 systemd[1]: Started cri-containerd-b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667.scope - libcontainer container b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667. 
Jul 2 00:20:08.236037 containerd[1475]: time="2024-07-02T00:20:08.235864760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xt5v9,Uid:dd115817-4e8c-461e-8f88-d4b90cd86369,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\"" Jul 2 00:20:08.237496 kubelet[2562]: E0702 00:20:08.237207 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:08.843146 kubelet[2562]: E0702 00:20:08.842896 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:08.855587 kubelet[2562]: I0702 00:20:08.855524 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bt9sb" podStartSLOduration=1.8554934379999999 podStartE2EDuration="1.855493438s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:08.854085836 +0000 UTC m=+15.219413096" watchObservedRunningTime="2024-07-02 00:20:08.855493438 +0000 UTC m=+15.220820694" Jul 2 00:20:12.907236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294926836.mount: Deactivated successfully. 
Jul 2 00:20:15.422736 containerd[1475]: time="2024-07-02T00:20:15.422503007Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:15.425369 containerd[1475]: time="2024-07-02T00:20:15.425286419Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735315" Jul 2 00:20:15.426077 containerd[1475]: time="2024-07-02T00:20:15.425919212Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:15.428062 containerd[1475]: time="2024-07-02T00:20:15.428016748Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.558620189s" Jul 2 00:20:15.428062 containerd[1475]: time="2024-07-02T00:20:15.428061906Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:20:15.442886 containerd[1475]: time="2024-07-02T00:20:15.442666218Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:20:15.445475 containerd[1475]: time="2024-07-02T00:20:15.444494823Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:20:15.501973 containerd[1475]: time="2024-07-02T00:20:15.501911029Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\"" Jul 2 00:20:15.502770 containerd[1475]: time="2024-07-02T00:20:15.502734919Z" level=info msg="StartContainer for \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\"" Jul 2 00:20:15.625125 systemd[1]: Started cri-containerd-e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a.scope - libcontainer container e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a. Jul 2 00:20:15.659888 containerd[1475]: time="2024-07-02T00:20:15.659788376Z" level=info msg="StartContainer for \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\" returns successfully" Jul 2 00:20:15.677316 systemd[1]: cri-containerd-e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a.scope: Deactivated successfully. 
Jul 2 00:20:15.793472 containerd[1475]: time="2024-07-02T00:20:15.773108445Z" level=info msg="shim disconnected" id=e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a namespace=k8s.io Jul 2 00:20:15.793472 containerd[1475]: time="2024-07-02T00:20:15.793472003Z" level=warning msg="cleaning up after shim disconnected" id=e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a namespace=k8s.io Jul 2 00:20:15.793472 containerd[1475]: time="2024-07-02T00:20:15.793488695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:15.863861 kubelet[2562]: E0702 00:20:15.863071 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:15.869007 containerd[1475]: time="2024-07-02T00:20:15.868619190Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:20:15.890080 containerd[1475]: time="2024-07-02T00:20:15.889849907Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\"" Jul 2 00:20:15.891845 containerd[1475]: time="2024-07-02T00:20:15.891075257Z" level=info msg="StartContainer for \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\"" Jul 2 00:20:15.921091 systemd[1]: Started cri-containerd-dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a.scope - libcontainer container dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a. 
Jul 2 00:20:15.950903 containerd[1475]: time="2024-07-02T00:20:15.950739251Z" level=info msg="StartContainer for \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\" returns successfully" Jul 2 00:20:15.969139 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:20:15.969541 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:20:15.969642 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:20:15.978454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:20:15.978837 systemd[1]: cri-containerd-dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a.scope: Deactivated successfully. Jul 2 00:20:15.999655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:20:16.020459 containerd[1475]: time="2024-07-02T00:20:16.020076183Z" level=info msg="shim disconnected" id=dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a namespace=k8s.io Jul 2 00:20:16.020459 containerd[1475]: time="2024-07-02T00:20:16.020182904Z" level=warning msg="cleaning up after shim disconnected" id=dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a namespace=k8s.io Jul 2 00:20:16.020459 containerd[1475]: time="2024-07-02T00:20:16.020199884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:16.493159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a-rootfs.mount: Deactivated successfully. Jul 2 00:20:16.790615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681081391.mount: Deactivated successfully. 
Jul 2 00:20:16.866136 kubelet[2562]: E0702 00:20:16.865624 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:16.873265 containerd[1475]: time="2024-07-02T00:20:16.873021739Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:20:16.909648 containerd[1475]: time="2024-07-02T00:20:16.909340512Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\"" Jul 2 00:20:16.913049 containerd[1475]: time="2024-07-02T00:20:16.911753499Z" level=info msg="StartContainer for \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\"" Jul 2 00:20:16.958211 systemd[1]: Started cri-containerd-6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4.scope - libcontainer container 6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4. Jul 2 00:20:17.003871 containerd[1475]: time="2024-07-02T00:20:17.003723512Z" level=info msg="StartContainer for \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\" returns successfully" Jul 2 00:20:17.009709 systemd[1]: cri-containerd-6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4.scope: Deactivated successfully. 
Jul 2 00:20:17.045659 containerd[1475]: time="2024-07-02T00:20:17.044959501Z" level=info msg="shim disconnected" id=6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4 namespace=k8s.io Jul 2 00:20:17.045659 containerd[1475]: time="2024-07-02T00:20:17.045015940Z" level=warning msg="cleaning up after shim disconnected" id=6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4 namespace=k8s.io Jul 2 00:20:17.045659 containerd[1475]: time="2024-07-02T00:20:17.045024838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:17.062838 containerd[1475]: time="2024-07-02T00:20:17.062287485Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:20:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:20:17.472157 containerd[1475]: time="2024-07-02T00:20:17.471974987Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:17.472917 containerd[1475]: time="2024-07-02T00:20:17.472875108Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229" Jul 2 00:20:17.474146 containerd[1475]: time="2024-07-02T00:20:17.474104026Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:17.475647 containerd[1475]: time="2024-07-02T00:20:17.475605021Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo 
digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.032890634s" Jul 2 00:20:17.475647 containerd[1475]: time="2024-07-02T00:20:17.475649590Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 00:20:17.482347 containerd[1475]: time="2024-07-02T00:20:17.482293099Z" level=info msg="CreateContainer within sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:20:17.492783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538258157.mount: Deactivated successfully. Jul 2 00:20:17.494783 containerd[1475]: time="2024-07-02T00:20:17.494745584Z" level=info msg="CreateContainer within sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\"" Jul 2 00:20:17.496833 containerd[1475]: time="2024-07-02T00:20:17.496000278Z" level=info msg="StartContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\"" Jul 2 00:20:17.534030 systemd[1]: Started cri-containerd-1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8.scope - libcontainer container 1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8. 
Jul 2 00:20:17.563549 containerd[1475]: time="2024-07-02T00:20:17.563410217Z" level=info msg="StartContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" returns successfully" Jul 2 00:20:17.869618 kubelet[2562]: E0702 00:20:17.869572 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:17.873666 kubelet[2562]: E0702 00:20:17.872835 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:17.874778 containerd[1475]: time="2024-07-02T00:20:17.874719890Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:20:17.887076 containerd[1475]: time="2024-07-02T00:20:17.886987116Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\"" Jul 2 00:20:17.888373 containerd[1475]: time="2024-07-02T00:20:17.888334712Z" level=info msg="StartContainer for \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\"" Jul 2 00:20:17.900951 kubelet[2562]: I0702 00:20:17.900384 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xt5v9" podStartSLOduration=1.662569293 podStartE2EDuration="10.900360083s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="2024-07-02 00:20:08.238846225 +0000 UTC m=+14.604173460" lastFinishedPulling="2024-07-02 00:20:17.476637012 +0000 UTC m=+23.841964250" observedRunningTime="2024-07-02 
00:20:17.896609322 +0000 UTC m=+24.261936579" watchObservedRunningTime="2024-07-02 00:20:17.900360083 +0000 UTC m=+24.265687338" Jul 2 00:20:17.940026 systemd[1]: Started cri-containerd-c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27.scope - libcontainer container c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27. Jul 2 00:20:17.979398 systemd[1]: cri-containerd-c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27.scope: Deactivated successfully. Jul 2 00:20:17.987896 containerd[1475]: time="2024-07-02T00:20:17.986304147Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice/cri-containerd-c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27.scope/memory.events\": no such file or directory" Jul 2 00:20:17.988625 containerd[1475]: time="2024-07-02T00:20:17.988352954Z" level=info msg="StartContainer for \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\" returns successfully" Jul 2 00:20:18.024164 containerd[1475]: time="2024-07-02T00:20:18.024022556Z" level=info msg="shim disconnected" id=c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27 namespace=k8s.io Jul 2 00:20:18.024563 containerd[1475]: time="2024-07-02T00:20:18.024424593Z" level=warning msg="cleaning up after shim disconnected" id=c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27 namespace=k8s.io Jul 2 00:20:18.024563 containerd[1475]: time="2024-07-02T00:20:18.024441893Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:18.879287 kubelet[2562]: E0702 00:20:18.879230 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:18.880044 kubelet[2562]: E0702 00:20:18.879948 2562 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:18.883479 containerd[1475]: time="2024-07-02T00:20:18.883145646Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:20:18.925632 containerd[1475]: time="2024-07-02T00:20:18.925577263Z" level=info msg="CreateContainer within sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\"" Jul 2 00:20:18.928699 containerd[1475]: time="2024-07-02T00:20:18.928608317Z" level=info msg="StartContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\"" Jul 2 00:20:18.972022 systemd[1]: Started cri-containerd-3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38.scope - libcontainer container 3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38. 
Jul 2 00:20:19.009773 containerd[1475]: time="2024-07-02T00:20:19.009656145Z" level=info msg="StartContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" returns successfully" Jul 2 00:20:19.245607 kubelet[2562]: I0702 00:20:19.245205 2562 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:20:19.286353 kubelet[2562]: I0702 00:20:19.286114 2562 topology_manager.go:215] "Topology Admit Handler" podUID="12c9df01-4d74-4db5-98b4-f0d343490788" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f658k" Jul 2 00:20:19.287969 kubelet[2562]: I0702 00:20:19.286616 2562 topology_manager.go:215] "Topology Admit Handler" podUID="19cd4bf8-cb7e-474c-b706-5e6ba1673e57" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4xl7f" Jul 2 00:20:19.299255 systemd[1]: Created slice kubepods-burstable-pod12c9df01_4d74_4db5_98b4_f0d343490788.slice - libcontainer container kubepods-burstable-pod12c9df01_4d74_4db5_98b4_f0d343490788.slice. Jul 2 00:20:19.319754 systemd[1]: Created slice kubepods-burstable-pod19cd4bf8_cb7e_474c_b706_5e6ba1673e57.slice - libcontainer container kubepods-burstable-pod19cd4bf8_cb7e_474c_b706_5e6ba1673e57.slice. 
Jul 2 00:20:19.443104 kubelet[2562]: I0702 00:20:19.443013 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19cd4bf8-cb7e-474c-b706-5e6ba1673e57-config-volume\") pod \"coredns-7db6d8ff4d-4xl7f\" (UID: \"19cd4bf8-cb7e-474c-b706-5e6ba1673e57\") " pod="kube-system/coredns-7db6d8ff4d-4xl7f" Jul 2 00:20:19.443897 kubelet[2562]: I0702 00:20:19.443691 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx9s\" (UniqueName: \"kubernetes.io/projected/19cd4bf8-cb7e-474c-b706-5e6ba1673e57-kube-api-access-wvx9s\") pod \"coredns-7db6d8ff4d-4xl7f\" (UID: \"19cd4bf8-cb7e-474c-b706-5e6ba1673e57\") " pod="kube-system/coredns-7db6d8ff4d-4xl7f" Jul 2 00:20:19.443897 kubelet[2562]: I0702 00:20:19.443757 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12c9df01-4d74-4db5-98b4-f0d343490788-config-volume\") pod \"coredns-7db6d8ff4d-f658k\" (UID: \"12c9df01-4d74-4db5-98b4-f0d343490788\") " pod="kube-system/coredns-7db6d8ff4d-f658k" Jul 2 00:20:19.443897 kubelet[2562]: I0702 00:20:19.443798 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7qn\" (UniqueName: \"kubernetes.io/projected/12c9df01-4d74-4db5-98b4-f0d343490788-kube-api-access-wh7qn\") pod \"coredns-7db6d8ff4d-f658k\" (UID: \"12c9df01-4d74-4db5-98b4-f0d343490788\") " pod="kube-system/coredns-7db6d8ff4d-f658k" Jul 2 00:20:19.609430 kubelet[2562]: E0702 00:20:19.609279 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:19.610771 containerd[1475]: time="2024-07-02T00:20:19.610714574Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-f658k,Uid:12c9df01-4d74-4db5-98b4-f0d343490788,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:19.631904 kubelet[2562]: E0702 00:20:19.631303 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:19.636186 containerd[1475]: time="2024-07-02T00:20:19.636130450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xl7f,Uid:19cd4bf8-cb7e-474c-b706-5e6ba1673e57,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:19.888093 kubelet[2562]: E0702 00:20:19.887966 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:19.911056 kubelet[2562]: I0702 00:20:19.910981 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4hzlt" podStartSLOduration=5.333272385 podStartE2EDuration="12.910958719s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="2024-07-02 00:20:07.864266197 +0000 UTC m=+14.229593432" lastFinishedPulling="2024-07-02 00:20:15.441952533 +0000 UTC m=+21.807279766" observedRunningTime="2024-07-02 00:20:19.90889333 +0000 UTC m=+26.274220588" watchObservedRunningTime="2024-07-02 00:20:19.910958719 +0000 UTC m=+26.276285974" Jul 2 00:20:20.888575 kubelet[2562]: E0702 00:20:20.888477 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:21.589595 systemd-networkd[1368]: cilium_host: Link UP Jul 2 00:20:21.591584 systemd-networkd[1368]: cilium_net: Link UP Jul 2 00:20:21.592235 systemd-networkd[1368]: cilium_net: Gained carrier Jul 2 00:20:21.592464 systemd-networkd[1368]: cilium_host: Gained carrier Jul 2 
00:20:21.703895 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jul 2 00:20:21.737468 systemd-networkd[1368]: cilium_vxlan: Link UP Jul 2 00:20:21.737476 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jul 2 00:20:21.890843 kubelet[2562]: E0702 00:20:21.890587 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:22.125860 kernel: NET: Registered PF_ALG protocol family Jul 2 00:20:22.430919 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jul 2 00:20:22.940910 systemd-networkd[1368]: lxc_health: Link UP Jul 2 00:20:22.946385 systemd-networkd[1368]: lxc_health: Gained carrier Jul 2 00:20:23.007121 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Jul 2 00:20:23.213743 systemd-networkd[1368]: lxc0e82410b3459: Link UP Jul 2 00:20:23.218128 kernel: eth0: renamed from tmpdfc96 Jul 2 00:20:23.223438 systemd-networkd[1368]: lxc0e82410b3459: Gained carrier Jul 2 00:20:23.262675 systemd-networkd[1368]: lxc2a778d4c5468: Link UP Jul 2 00:20:23.273111 kernel: eth0: renamed from tmp030c5 Jul 2 00:20:23.284243 systemd-networkd[1368]: lxc2a778d4c5468: Gained carrier Jul 2 00:20:23.662302 kubelet[2562]: E0702 00:20:23.662255 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:24.093996 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jul 2 00:20:24.862408 systemd-networkd[1368]: lxc0e82410b3459: Gained IPv6LL Jul 2 00:20:24.926062 systemd-networkd[1368]: lxc2a778d4c5468: Gained IPv6LL Jul 2 00:20:27.916795 containerd[1475]: time="2024-07-02T00:20:27.916554148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:27.916795 containerd[1475]: time="2024-07-02T00:20:27.916619212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:27.916795 containerd[1475]: time="2024-07-02T00:20:27.916638970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:27.916795 containerd[1475]: time="2024-07-02T00:20:27.916656958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:27.929102 containerd[1475]: time="2024-07-02T00:20:27.927900882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:27.929102 containerd[1475]: time="2024-07-02T00:20:27.928201877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:27.929102 containerd[1475]: time="2024-07-02T00:20:27.928223764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:27.929102 containerd[1475]: time="2024-07-02T00:20:27.928233409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:27.983028 systemd[1]: Started cri-containerd-030c506835cd8ef58ec2988db224cfa545585783bbc20aa67a1ae5630092161c.scope - libcontainer container 030c506835cd8ef58ec2988db224cfa545585783bbc20aa67a1ae5630092161c. Jul 2 00:20:27.985976 systemd[1]: Started cri-containerd-dfc96a32c0f455e719f7012ea716ea2bc5d0199f8d049deba717c1a59d120d46.scope - libcontainer container dfc96a32c0f455e719f7012ea716ea2bc5d0199f8d049deba717c1a59d120d46. 
Jul 2 00:20:28.064383 containerd[1475]: time="2024-07-02T00:20:28.064336572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f658k,Uid:12c9df01-4d74-4db5-98b4-f0d343490788,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfc96a32c0f455e719f7012ea716ea2bc5d0199f8d049deba717c1a59d120d46\"" Jul 2 00:20:28.066625 kubelet[2562]: E0702 00:20:28.066508 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:28.074966 containerd[1475]: time="2024-07-02T00:20:28.074437245Z" level=info msg="CreateContainer within sandbox \"dfc96a32c0f455e719f7012ea716ea2bc5d0199f8d049deba717c1a59d120d46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:20:28.100561 containerd[1475]: time="2024-07-02T00:20:28.100276736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4xl7f,Uid:19cd4bf8-cb7e-474c-b706-5e6ba1673e57,Namespace:kube-system,Attempt:0,} returns sandbox id \"030c506835cd8ef58ec2988db224cfa545585783bbc20aa67a1ae5630092161c\"" Jul 2 00:20:28.100561 containerd[1475]: time="2024-07-02T00:20:28.100464049Z" level=info msg="CreateContainer within sandbox \"dfc96a32c0f455e719f7012ea716ea2bc5d0199f8d049deba717c1a59d120d46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36533ffedd8c2645a12ac433e0c0e82b80abb2fc5039a58f7f3ff06091d84ad1\"" Jul 2 00:20:28.101196 containerd[1475]: time="2024-07-02T00:20:28.101062746Z" level=info msg="StartContainer for \"36533ffedd8c2645a12ac433e0c0e82b80abb2fc5039a58f7f3ff06091d84ad1\"" Jul 2 00:20:28.102455 kubelet[2562]: E0702 00:20:28.102274 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:28.106247 containerd[1475]: time="2024-07-02T00:20:28.105849538Z" 
level=info msg="CreateContainer within sandbox \"030c506835cd8ef58ec2988db224cfa545585783bbc20aa67a1ae5630092161c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:20:28.117412 containerd[1475]: time="2024-07-02T00:20:28.117332834Z" level=info msg="CreateContainer within sandbox \"030c506835cd8ef58ec2988db224cfa545585783bbc20aa67a1ae5630092161c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed944ad9736ff5c1f7cf1654a962226be70ee7a91ec6b69c8d6a79cdc597fb1f\"" Jul 2 00:20:28.118639 containerd[1475]: time="2024-07-02T00:20:28.118435720Z" level=info msg="StartContainer for \"ed944ad9736ff5c1f7cf1654a962226be70ee7a91ec6b69c8d6a79cdc597fb1f\"" Jul 2 00:20:28.145078 systemd[1]: Started cri-containerd-36533ffedd8c2645a12ac433e0c0e82b80abb2fc5039a58f7f3ff06091d84ad1.scope - libcontainer container 36533ffedd8c2645a12ac433e0c0e82b80abb2fc5039a58f7f3ff06091d84ad1. Jul 2 00:20:28.163269 systemd[1]: Started cri-containerd-ed944ad9736ff5c1f7cf1654a962226be70ee7a91ec6b69c8d6a79cdc597fb1f.scope - libcontainer container ed944ad9736ff5c1f7cf1654a962226be70ee7a91ec6b69c8d6a79cdc597fb1f. 
Jul 2 00:20:28.208229 containerd[1475]: time="2024-07-02T00:20:28.208064573Z" level=info msg="StartContainer for \"ed944ad9736ff5c1f7cf1654a962226be70ee7a91ec6b69c8d6a79cdc597fb1f\" returns successfully" Jul 2 00:20:28.209513 containerd[1475]: time="2024-07-02T00:20:28.209028689Z" level=info msg="StartContainer for \"36533ffedd8c2645a12ac433e0c0e82b80abb2fc5039a58f7f3ff06091d84ad1\" returns successfully" Jul 2 00:20:28.910947 kubelet[2562]: E0702 00:20:28.910901 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:28.915631 kubelet[2562]: E0702 00:20:28.915476 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:28.927063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846503924.mount: Deactivated successfully. 
Jul 2 00:20:28.934578 kubelet[2562]: I0702 00:20:28.933480 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4xl7f" podStartSLOduration=21.93346009 podStartE2EDuration="21.93346009s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:28.932773455 +0000 UTC m=+35.298100711" watchObservedRunningTime="2024-07-02 00:20:28.93346009 +0000 UTC m=+35.298787346" Jul 2 00:20:28.959676 kubelet[2562]: I0702 00:20:28.959616 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f658k" podStartSLOduration=21.959597344 podStartE2EDuration="21.959597344s" podCreationTimestamp="2024-07-02 00:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:28.953096849 +0000 UTC m=+35.318424105" watchObservedRunningTime="2024-07-02 00:20:28.959597344 +0000 UTC m=+35.324924599" Jul 2 00:20:29.918036 kubelet[2562]: E0702 00:20:29.917740 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:29.918036 kubelet[2562]: E0702 00:20:29.917914 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:30.922166 kubelet[2562]: E0702 00:20:30.921602 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:30.922166 kubelet[2562]: E0702 00:20:30.922084 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:35.496386 kubelet[2562]: I0702 00:20:35.496062 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:20:35.497591 kubelet[2562]: E0702 00:20:35.496882 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:35.933867 kubelet[2562]: E0702 00:20:35.933704 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:20:45.186240 systemd[1]: Started sshd@9-146.190.126.73:22-147.75.109.163:34158.service - OpenSSH per-connection server daemon (147.75.109.163:34158). Jul 2 00:20:45.257869 sshd[3949]: Accepted publickey for core from 147.75.109.163 port 34158 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:20:45.260637 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:45.271843 systemd-logind[1456]: New session 10 of user core. Jul 2 00:20:45.275165 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:20:45.867294 sshd[3949]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:45.870418 systemd[1]: sshd@9-146.190.126.73:22-147.75.109.163:34158.service: Deactivated successfully. Jul 2 00:20:45.873256 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:20:45.875791 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:20:45.877131 systemd-logind[1456]: Removed session 10. Jul 2 00:20:50.886179 systemd[1]: Started sshd@10-146.190.126.73:22-147.75.109.163:34162.service - OpenSSH per-connection server daemon (147.75.109.163:34162). 
Jul 2 00:20:50.940550 sshd[3964]: Accepted publickey for core from 147.75.109.163 port 34162 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:20:50.943375 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:50.951384 systemd-logind[1456]: New session 11 of user core. Jul 2 00:20:50.957156 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:20:51.105946 sshd[3964]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:51.110686 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:20:51.112270 systemd[1]: sshd@10-146.190.126.73:22-147.75.109.163:34162.service: Deactivated successfully. Jul 2 00:20:51.114811 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:20:51.116914 systemd-logind[1456]: Removed session 11. Jul 2 00:20:56.124155 systemd[1]: Started sshd@11-146.190.126.73:22-147.75.109.163:44928.service - OpenSSH per-connection server daemon (147.75.109.163:44928). Jul 2 00:20:56.167788 sshd[3979]: Accepted publickey for core from 147.75.109.163 port 44928 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:20:56.169559 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:56.174068 systemd-logind[1456]: New session 12 of user core. Jul 2 00:20:56.183065 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:20:56.325231 sshd[3979]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:56.329741 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:20:56.330893 systemd[1]: sshd@11-146.190.126.73:22-147.75.109.163:44928.service: Deactivated successfully. Jul 2 00:20:56.333173 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:20:56.336019 systemd-logind[1456]: Removed session 12. 
Jul 2 00:21:01.344280 systemd[1]: Started sshd@12-146.190.126.73:22-147.75.109.163:44944.service - OpenSSH per-connection server daemon (147.75.109.163:44944). Jul 2 00:21:01.396303 sshd[3993]: Accepted publickey for core from 147.75.109.163 port 44944 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:01.398084 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:01.404766 systemd-logind[1456]: New session 13 of user core. Jul 2 00:21:01.409160 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:21:01.551014 sshd[3993]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:01.561705 systemd[1]: sshd@12-146.190.126.73:22-147.75.109.163:44944.service: Deactivated successfully. Jul 2 00:21:01.564251 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:21:01.567276 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:21:01.573251 systemd[1]: Started sshd@13-146.190.126.73:22-147.75.109.163:44950.service - OpenSSH per-connection server daemon (147.75.109.163:44950). Jul 2 00:21:01.575685 systemd-logind[1456]: Removed session 13. Jul 2 00:21:01.618627 sshd[4007]: Accepted publickey for core from 147.75.109.163 port 44950 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:01.620356 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:01.628117 systemd-logind[1456]: New session 14 of user core. Jul 2 00:21:01.633238 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:21:01.855079 sshd[4007]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:01.869776 systemd[1]: sshd@13-146.190.126.73:22-147.75.109.163:44950.service: Deactivated successfully. Jul 2 00:21:01.876252 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:21:01.882543 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. 
Jul 2 00:21:01.892461 systemd[1]: Started sshd@14-146.190.126.73:22-147.75.109.163:44962.service - OpenSSH per-connection server daemon (147.75.109.163:44962). Jul 2 00:21:01.897267 systemd-logind[1456]: Removed session 14. Jul 2 00:21:01.950291 sshd[4018]: Accepted publickey for core from 147.75.109.163 port 44962 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:01.953221 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:01.959575 systemd-logind[1456]: New session 15 of user core. Jul 2 00:21:01.966193 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:21:02.121269 sshd[4018]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:02.127752 systemd[1]: sshd@14-146.190.126.73:22-147.75.109.163:44962.service: Deactivated successfully. Jul 2 00:21:02.130537 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:21:02.131650 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:21:02.133898 systemd-logind[1456]: Removed session 15. Jul 2 00:21:07.140167 systemd[1]: Started sshd@15-146.190.126.73:22-147.75.109.163:54846.service - OpenSSH per-connection server daemon (147.75.109.163:54846). Jul 2 00:21:07.181569 sshd[4031]: Accepted publickey for core from 147.75.109.163 port 54846 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:07.183476 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:07.190169 systemd-logind[1456]: New session 16 of user core. Jul 2 00:21:07.196094 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:21:07.323275 sshd[4031]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:07.327589 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:21:07.328374 systemd[1]: sshd@15-146.190.126.73:22-147.75.109.163:54846.service: Deactivated successfully. 
Jul 2 00:21:07.330931 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:21:07.332664 systemd-logind[1456]: Removed session 16. Jul 2 00:21:09.783839 kubelet[2562]: E0702 00:21:09.783135 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:12.342210 systemd[1]: Started sshd@16-146.190.126.73:22-147.75.109.163:54854.service - OpenSSH per-connection server daemon (147.75.109.163:54854). Jul 2 00:21:12.383944 sshd[4045]: Accepted publickey for core from 147.75.109.163 port 54854 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:12.385609 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:12.390941 systemd-logind[1456]: New session 17 of user core. Jul 2 00:21:12.397142 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:21:12.526338 sshd[4045]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:12.531254 systemd[1]: sshd@16-146.190.126.73:22-147.75.109.163:54854.service: Deactivated successfully. Jul 2 00:21:12.533192 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:21:12.534179 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:21:12.536023 systemd-logind[1456]: Removed session 17. Jul 2 00:21:17.546939 systemd[1]: Started sshd@17-146.190.126.73:22-147.75.109.163:43872.service - OpenSSH per-connection server daemon (147.75.109.163:43872). Jul 2 00:21:17.609002 sshd[4058]: Accepted publickey for core from 147.75.109.163 port 43872 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:17.611152 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:17.616958 systemd-logind[1456]: New session 18 of user core. Jul 2 00:21:17.627083 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 2 00:21:17.764524 sshd[4058]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:17.775253 systemd[1]: sshd@17-146.190.126.73:22-147.75.109.163:43872.service: Deactivated successfully. Jul 2 00:21:17.777859 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:21:17.779456 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:21:17.790391 systemd[1]: Started sshd@18-146.190.126.73:22-147.75.109.163:43884.service - OpenSSH per-connection server daemon (147.75.109.163:43884). Jul 2 00:21:17.792625 systemd-logind[1456]: Removed session 18. Jul 2 00:21:17.830709 sshd[4071]: Accepted publickey for core from 147.75.109.163 port 43884 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:17.832572 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:17.838564 systemd-logind[1456]: New session 19 of user core. Jul 2 00:21:17.853175 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:21:18.152992 sshd[4071]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:18.160738 systemd[1]: sshd@18-146.190.126.73:22-147.75.109.163:43884.service: Deactivated successfully. Jul 2 00:21:18.162892 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:21:18.163660 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:21:18.172136 systemd[1]: Started sshd@19-146.190.126.73:22-147.75.109.163:43888.service - OpenSSH per-connection server daemon (147.75.109.163:43888). Jul 2 00:21:18.174123 systemd-logind[1456]: Removed session 19. Jul 2 00:21:18.252579 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 43888 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:18.254711 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:18.260325 systemd-logind[1456]: New session 20 of user core. 
Jul 2 00:21:18.270064 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:21:19.965561 sshd[4082]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:19.985182 systemd[1]: Started sshd@20-146.190.126.73:22-147.75.109.163:43898.service - OpenSSH per-connection server daemon (147.75.109.163:43898). Jul 2 00:21:19.985727 systemd[1]: sshd@19-146.190.126.73:22-147.75.109.163:43888.service: Deactivated successfully. Jul 2 00:21:19.990329 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:21:19.994120 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:21:20.001977 systemd-logind[1456]: Removed session 20. Jul 2 00:21:20.049082 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 43898 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:20.050673 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:20.057412 systemd-logind[1456]: New session 21 of user core. Jul 2 00:21:20.063059 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:21:20.344197 sshd[4096]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:20.354943 systemd[1]: sshd@20-146.190.126.73:22-147.75.109.163:43898.service: Deactivated successfully. Jul 2 00:21:20.358434 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:21:20.361528 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:21:20.367290 systemd[1]: Started sshd@21-146.190.126.73:22-147.75.109.163:43906.service - OpenSSH per-connection server daemon (147.75.109.163:43906). Jul 2 00:21:20.371256 systemd-logind[1456]: Removed session 21. 
Jul 2 00:21:20.421284 sshd[4111]: Accepted publickey for core from 147.75.109.163 port 43906 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:20.423316 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:20.428607 systemd-logind[1456]: New session 22 of user core. Jul 2 00:21:20.438086 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:21:20.565080 sshd[4111]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:20.569038 systemd[1]: sshd@21-146.190.126.73:22-147.75.109.163:43906.service: Deactivated successfully. Jul 2 00:21:20.571273 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:21:20.572448 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:21:20.573408 systemd-logind[1456]: Removed session 22. Jul 2 00:21:21.782135 kubelet[2562]: E0702 00:21:21.781542 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:25.584241 systemd[1]: Started sshd@22-146.190.126.73:22-147.75.109.163:43588.service - OpenSSH per-connection server daemon (147.75.109.163:43588). Jul 2 00:21:25.633074 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 43588 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:25.634675 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:25.639869 systemd-logind[1456]: New session 23 of user core. Jul 2 00:21:25.648033 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:21:25.776073 sshd[4123]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:25.780014 systemd[1]: sshd@22-146.190.126.73:22-147.75.109.163:43588.service: Deactivated successfully. Jul 2 00:21:25.782364 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 2 00:21:25.785677 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:21:25.787071 systemd-logind[1456]: Removed session 23. Jul 2 00:21:30.805285 systemd[1]: Started sshd@23-146.190.126.73:22-147.75.109.163:43598.service - OpenSSH per-connection server daemon (147.75.109.163:43598). Jul 2 00:21:30.845073 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 43598 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:30.846579 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:30.851678 systemd-logind[1456]: New session 24 of user core. Jul 2 00:21:30.859156 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:21:30.989214 sshd[4139]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:30.994913 systemd[1]: sshd@23-146.190.126.73:22-147.75.109.163:43598.service: Deactivated successfully. Jul 2 00:21:30.997467 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:21:30.999245 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:21:31.000344 systemd-logind[1456]: Removed session 24. Jul 2 00:21:31.782557 kubelet[2562]: E0702 00:21:31.781163 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:32.781169 kubelet[2562]: E0702 00:21:32.781058 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:36.005232 systemd[1]: Started sshd@24-146.190.126.73:22-147.75.109.163:53668.service - OpenSSH per-connection server daemon (147.75.109.163:53668). 
Jul 2 00:21:36.079300 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 53668 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:36.082118 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:36.087578 systemd-logind[1456]: New session 25 of user core. Jul 2 00:21:36.095033 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:21:36.221087 sshd[4152]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:36.225039 systemd[1]: sshd@24-146.190.126.73:22-147.75.109.163:53668.service: Deactivated successfully. Jul 2 00:21:36.227059 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:21:36.228055 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:21:36.229538 systemd-logind[1456]: Removed session 25. Jul 2 00:21:40.781356 kubelet[2562]: E0702 00:21:40.781258 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:41.239174 systemd[1]: Started sshd@25-146.190.126.73:22-147.75.109.163:53678.service - OpenSSH per-connection server daemon (147.75.109.163:53678). Jul 2 00:21:41.279059 sshd[4168]: Accepted publickey for core from 147.75.109.163 port 53678 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:41.280959 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:41.285794 systemd-logind[1456]: New session 26 of user core. Jul 2 00:21:41.294125 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:21:41.425725 sshd[4168]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:41.429790 systemd[1]: sshd@25-146.190.126.73:22-147.75.109.163:53678.service: Deactivated successfully. Jul 2 00:21:41.432777 systemd[1]: session-26.scope: Deactivated successfully. 
Jul 2 00:21:41.435133 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:21:41.437437 systemd-logind[1456]: Removed session 26. Jul 2 00:21:46.446226 systemd[1]: Started sshd@26-146.190.126.73:22-147.75.109.163:43914.service - OpenSSH per-connection server daemon (147.75.109.163:43914). Jul 2 00:21:46.494530 sshd[4180]: Accepted publickey for core from 147.75.109.163 port 43914 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:46.497202 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:46.503384 systemd-logind[1456]: New session 27 of user core. Jul 2 00:21:46.509583 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:21:46.639356 sshd[4180]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:46.644405 systemd[1]: sshd@26-146.190.126.73:22-147.75.109.163:43914.service: Deactivated successfully. Jul 2 00:21:46.646577 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:21:46.649634 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:21:46.650992 systemd-logind[1456]: Removed session 27. Jul 2 00:21:47.781392 kubelet[2562]: E0702 00:21:47.781090 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:47.782848 kubelet[2562]: E0702 00:21:47.782550 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:51.069174 systemd[1]: Started sshd@27-146.190.126.73:22-141.98.10.125:58998.service - OpenSSH per-connection server daemon (141.98.10.125:58998). Jul 2 00:21:51.661212 systemd[1]: Started sshd@28-146.190.126.73:22-147.75.109.163:43916.service - OpenSSH per-connection server daemon (147.75.109.163:43916). 
Jul 2 00:21:51.705508 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 43916 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:51.707555 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:51.714019 systemd-logind[1456]: New session 28 of user core. Jul 2 00:21:51.720164 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:21:51.802737 sshd[4193]: Invalid user bmp from 141.98.10.125 port 58998 Jul 2 00:21:51.856617 sshd[4196]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:51.865938 systemd[1]: sshd@28-146.190.126.73:22-147.75.109.163:43916.service: Deactivated successfully. Jul 2 00:21:51.869559 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:21:51.873671 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:21:51.881353 systemd[1]: Started sshd@29-146.190.126.73:22-147.75.109.163:43930.service - OpenSSH per-connection server daemon (147.75.109.163:43930). Jul 2 00:21:51.883625 systemd-logind[1456]: Removed session 28. Jul 2 00:21:51.931179 sshd[4208]: Accepted publickey for core from 147.75.109.163 port 43930 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:51.932702 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:51.938878 systemd-logind[1456]: New session 29 of user core. Jul 2 00:21:51.950075 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:21:51.981346 sshd[4193]: Connection closed by invalid user bmp 141.98.10.125 port 58998 [preauth] Jul 2 00:21:51.983039 systemd[1]: sshd@27-146.190.126.73:22-141.98.10.125:58998.service: Deactivated successfully. 
Jul 2 00:21:52.781056 kubelet[2562]: E0702 00:21:52.780979 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:53.460454 containerd[1475]: time="2024-07-02T00:21:53.460393474Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:21:53.467443 containerd[1475]: time="2024-07-02T00:21:53.467284658Z" level=info msg="StopContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" with timeout 30 (s)" Jul 2 00:21:53.468856 containerd[1475]: time="2024-07-02T00:21:53.467691756Z" level=info msg="StopContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" with timeout 2 (s)" Jul 2 00:21:53.474098 containerd[1475]: time="2024-07-02T00:21:53.473748510Z" level=info msg="Stop container \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" with signal terminated" Jul 2 00:21:53.474768 containerd[1475]: time="2024-07-02T00:21:53.474664921Z" level=info msg="Stop container \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" with signal terminated" Jul 2 00:21:53.488270 systemd-networkd[1368]: lxc_health: Link DOWN Jul 2 00:21:53.488278 systemd-networkd[1368]: lxc_health: Lost carrier Jul 2 00:21:53.504787 systemd[1]: cri-containerd-1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8.scope: Deactivated successfully. Jul 2 00:21:53.519352 systemd[1]: cri-containerd-3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38.scope: Deactivated successfully. Jul 2 00:21:53.519678 systemd[1]: cri-containerd-3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38.scope: Consumed 8.380s CPU time. 
Jul 2 00:21:53.557651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8-rootfs.mount: Deactivated successfully. Jul 2 00:21:53.564828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38-rootfs.mount: Deactivated successfully. Jul 2 00:21:53.571427 containerd[1475]: time="2024-07-02T00:21:53.570966875Z" level=info msg="shim disconnected" id=1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8 namespace=k8s.io Jul 2 00:21:53.571427 containerd[1475]: time="2024-07-02T00:21:53.571316335Z" level=warning msg="cleaning up after shim disconnected" id=1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8 namespace=k8s.io Jul 2 00:21:53.571427 containerd[1475]: time="2024-07-02T00:21:53.571334123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:21:53.572289 containerd[1475]: time="2024-07-02T00:21:53.571336762Z" level=info msg="shim disconnected" id=3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38 namespace=k8s.io Jul 2 00:21:53.572289 containerd[1475]: time="2024-07-02T00:21:53.571478450Z" level=warning msg="cleaning up after shim disconnected" id=3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38 namespace=k8s.io Jul 2 00:21:53.572289 containerd[1475]: time="2024-07-02T00:21:53.571488654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:21:53.595251 containerd[1475]: time="2024-07-02T00:21:53.594738128Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:21:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:21:53.601552 containerd[1475]: time="2024-07-02T00:21:53.601244241Z" level=info msg="StopContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" returns successfully" 
Jul 2 00:21:53.604845 containerd[1475]: time="2024-07-02T00:21:53.602359434Z" level=info msg="StopPodSandbox for \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\"" Jul 2 00:21:53.604845 containerd[1475]: time="2024-07-02T00:21:53.602444177Z" level=info msg="Container to stop \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:21:53.606383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667-shm.mount: Deactivated successfully. Jul 2 00:21:53.607574 containerd[1475]: time="2024-07-02T00:21:53.607426271Z" level=info msg="StopContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" returns successfully" Jul 2 00:21:53.608291 containerd[1475]: time="2024-07-02T00:21:53.608214403Z" level=info msg="StopPodSandbox for \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\"" Jul 2 00:21:53.608543 containerd[1475]: time="2024-07-02T00:21:53.608262003Z" level=info msg="Container to stop \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:21:53.608637 containerd[1475]: time="2024-07-02T00:21:53.608621911Z" level=info msg="Container to stop \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:21:53.609910 containerd[1475]: time="2024-07-02T00:21:53.608689128Z" level=info msg="Container to stop \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:21:53.609910 containerd[1475]: time="2024-07-02T00:21:53.608702408Z" level=info msg="Container to stop \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Jul 2 00:21:53.609910 containerd[1475]: time="2024-07-02T00:21:53.608712581Z" level=info msg="Container to stop \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:21:53.611786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7-shm.mount: Deactivated successfully. Jul 2 00:21:53.619728 systemd[1]: cri-containerd-93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7.scope: Deactivated successfully. Jul 2 00:21:53.621566 systemd[1]: cri-containerd-b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667.scope: Deactivated successfully. Jul 2 00:21:53.656469 containerd[1475]: time="2024-07-02T00:21:53.656232001Z" level=info msg="shim disconnected" id=93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7 namespace=k8s.io Jul 2 00:21:53.656469 containerd[1475]: time="2024-07-02T00:21:53.656297052Z" level=warning msg="cleaning up after shim disconnected" id=93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7 namespace=k8s.io Jul 2 00:21:53.656469 containerd[1475]: time="2024-07-02T00:21:53.656312121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:21:53.660730 containerd[1475]: time="2024-07-02T00:21:53.660545528Z" level=info msg="shim disconnected" id=b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667 namespace=k8s.io Jul 2 00:21:53.660730 containerd[1475]: time="2024-07-02T00:21:53.660626636Z" level=warning msg="cleaning up after shim disconnected" id=b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667 namespace=k8s.io Jul 2 00:21:53.660730 containerd[1475]: time="2024-07-02T00:21:53.660635484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:21:53.681541 containerd[1475]: time="2024-07-02T00:21:53.680912886Z" level=warning msg="cleanup warnings 
time=\"2024-07-02T00:21:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:21:53.681541 containerd[1475]: time="2024-07-02T00:21:53.681390812Z" level=info msg="TearDown network for sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" successfully" Jul 2 00:21:53.681541 containerd[1475]: time="2024-07-02T00:21:53.681415391Z" level=info msg="StopPodSandbox for \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" returns successfully" Jul 2 00:21:53.682499 containerd[1475]: time="2024-07-02T00:21:53.682311687Z" level=info msg="TearDown network for sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" successfully" Jul 2 00:21:53.682499 containerd[1475]: time="2024-07-02T00:21:53.682338661Z" level=info msg="StopPodSandbox for \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" returns successfully" Jul 2 00:21:53.748070 kubelet[2562]: I0702 00:21:53.746494 2562 scope.go:117] "RemoveContainer" containerID="1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8" Jul 2 00:21:53.749174 containerd[1475]: time="2024-07-02T00:21:53.749124395Z" level=info msg="RemoveContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\"" Jul 2 00:21:53.761454 containerd[1475]: time="2024-07-02T00:21:53.761405154Z" level=info msg="RemoveContainer for \"1a4f1ff946577630c4b462dd6509e6b8bb148eb60e8cf65a0926d18600c3b4a8\" returns successfully" Jul 2 00:21:53.762028 kubelet[2562]: I0702 00:21:53.761856 2562 scope.go:117] "RemoveContainer" containerID="3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38" Jul 2 00:21:53.763628 containerd[1475]: time="2024-07-02T00:21:53.763591054Z" level=info msg="RemoveContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\"" Jul 2 00:21:53.766046 containerd[1475]: 
time="2024-07-02T00:21:53.766008595Z" level=info msg="RemoveContainer for \"3bae9120997ffdabcb7acc063aa7583dcc57e2e4982a4477f50c21c92afe8b38\" returns successfully" Jul 2 00:21:53.766272 kubelet[2562]: I0702 00:21:53.766198 2562 scope.go:117] "RemoveContainer" containerID="dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a" Jul 2 00:21:53.767623 containerd[1475]: time="2024-07-02T00:21:53.767506880Z" level=info msg="RemoveContainer for \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\"" Jul 2 00:21:53.769114 kubelet[2562]: I0702 00:21:53.769088 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkcg9\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-kube-api-access-lkcg9\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769274 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-run\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769305 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc024084-5098-453a-94cf-bfb0964f844e-cilium-config-path\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769332 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-hostproc\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769354 2562 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-lib-modules\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769371 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/dd115817-4e8c-461e-8f88-d4b90cd86369-kube-api-access-s4plm\") pod \"dd115817-4e8c-461e-8f88-d4b90cd86369\" (UID: \"dd115817-4e8c-461e-8f88-d4b90cd86369\") " Jul 2 00:21:53.769479 kubelet[2562]: I0702 00:21:53.769385 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-etc-cni-netd\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769688 kubelet[2562]: I0702 00:21:53.769410 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-hubble-tls\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769688 kubelet[2562]: I0702 00:21:53.769431 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc024084-5098-453a-94cf-bfb0964f844e-clustermesh-secrets\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.769688 kubelet[2562]: I0702 00:21:53.769446 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-cgroup\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: 
\"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769463 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd115817-4e8c-461e-8f88-d4b90cd86369-cilium-config-path\") pod \"dd115817-4e8c-461e-8f88-d4b90cd86369\" (UID: \"dd115817-4e8c-461e-8f88-d4b90cd86369\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769882 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cni-path\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769905 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-xtables-lock\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769920 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-bpf-maps\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769961 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-net\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770100 kubelet[2562]: I0702 00:21:53.769978 2562 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-kernel\") pod \"cc024084-5098-453a-94cf-bfb0964f844e\" (UID: \"cc024084-5098-453a-94cf-bfb0964f844e\") " Jul 2 00:21:53.770312 kubelet[2562]: I0702 00:21:53.770063 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.770605 kubelet[2562]: I0702 00:21:53.770387 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.771680 containerd[1475]: time="2024-07-02T00:21:53.771515385Z" level=info msg="RemoveContainer for \"dc040dfcee8c15f3f89325d99f8d49adbb59d3a77295bf0c742a09772ff1912a\" returns successfully" Jul 2 00:21:53.773794 kubelet[2562]: I0702 00:21:53.773582 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.774176 kubelet[2562]: I0702 00:21:53.774005 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.775894 kubelet[2562]: I0702 00:21:53.775448 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-hostproc" (OuterVolumeSpecName: "hostproc") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.775894 kubelet[2562]: I0702 00:21:53.775489 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.776118 kubelet[2562]: I0702 00:21:53.776100 2562 scope.go:117] "RemoveContainer" containerID="6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4" Jul 2 00:21:53.777033 kubelet[2562]: I0702 00:21:53.777006 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cni-path" (OuterVolumeSpecName: "cni-path") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.777100 kubelet[2562]: I0702 00:21:53.777044 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.777100 kubelet[2562]: I0702 00:21:53.777059 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.777100 kubelet[2562]: I0702 00:21:53.777073 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:21:53.782408 containerd[1475]: time="2024-07-02T00:21:53.782075330Z" level=info msg="RemoveContainer for \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\"" Jul 2 00:21:53.785830 kubelet[2562]: I0702 00:21:53.785355 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc024084-5098-453a-94cf-bfb0964f844e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:21:53.788142 kubelet[2562]: I0702 00:21:53.788105 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd115817-4e8c-461e-8f88-d4b90cd86369-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd115817-4e8c-461e-8f88-d4b90cd86369" (UID: "dd115817-4e8c-461e-8f88-d4b90cd86369"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:21:53.788439 containerd[1475]: time="2024-07-02T00:21:53.788269394Z" level=info msg="RemoveContainer for \"6a5d871895e6ba7679dfe16af4acc2ffd760be2447dd38871f87969247c555a4\" returns successfully" Jul 2 00:21:53.788870 kubelet[2562]: I0702 00:21:53.788663 2562 scope.go:117] "RemoveContainer" containerID="c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27" Jul 2 00:21:53.791885 kubelet[2562]: I0702 00:21:53.791206 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd115817-4e8c-461e-8f88-d4b90cd86369-kube-api-access-s4plm" (OuterVolumeSpecName: "kube-api-access-s4plm") pod "dd115817-4e8c-461e-8f88-d4b90cd86369" (UID: "dd115817-4e8c-461e-8f88-d4b90cd86369"). InnerVolumeSpecName "kube-api-access-s4plm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:21:53.792092 containerd[1475]: time="2024-07-02T00:21:53.792012703Z" level=info msg="RemoveContainer for \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\"" Jul 2 00:21:53.794215 containerd[1475]: time="2024-07-02T00:21:53.794178913Z" level=info msg="RemoveContainer for \"c78e67195c30bac0e8537e9e4f8961d34c7f769ca58d96f47c32699e407c9c27\" returns successfully" Jul 2 00:21:53.794821 kubelet[2562]: I0702 00:21:53.794695 2562 scope.go:117] "RemoveContainer" containerID="e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a" Jul 2 00:21:53.796838 containerd[1475]: time="2024-07-02T00:21:53.796749954Z" level=info msg="RemoveContainer for \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\"" Jul 2 00:21:53.799731 containerd[1475]: time="2024-07-02T00:21:53.799692131Z" level=info msg="RemoveContainer for \"e6409d619399058b93c49562c17edda60ac685ad7cfd33e9a061fd9129d5ae4a\" returns successfully" Jul 2 00:21:53.800943 kubelet[2562]: I0702 00:21:53.800891 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:21:53.801975 containerd[1475]: time="2024-07-02T00:21:53.801936360Z" level=info msg="StopPodSandbox for \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\"" Jul 2 00:21:53.802042 containerd[1475]: time="2024-07-02T00:21:53.802030109Z" level=info msg="TearDown network for sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" successfully" Jul 2 00:21:53.802183 containerd[1475]: time="2024-07-02T00:21:53.802041948Z" level=info msg="StopPodSandbox for \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" returns successfully" Jul 2 00:21:53.802459 kubelet[2562]: I0702 00:21:53.802370 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc024084-5098-453a-94cf-bfb0964f844e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:21:53.802459 kubelet[2562]: I0702 00:21:53.802413 2562 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-kube-api-access-lkcg9" (OuterVolumeSpecName: "kube-api-access-lkcg9") pod "cc024084-5098-453a-94cf-bfb0964f844e" (UID: "cc024084-5098-453a-94cf-bfb0964f844e"). InnerVolumeSpecName "kube-api-access-lkcg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:21:53.802600 containerd[1475]: time="2024-07-02T00:21:53.802579541Z" level=info msg="RemovePodSandbox for \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\"" Jul 2 00:21:53.805730 containerd[1475]: time="2024-07-02T00:21:53.805692631Z" level=info msg="Forcibly stopping sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\"" Jul 2 00:21:53.808651 containerd[1475]: time="2024-07-02T00:21:53.805783463Z" level=info msg="TearDown network for sandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" successfully" Jul 2 00:21:53.811701 containerd[1475]: time="2024-07-02T00:21:53.811646810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:21:53.812196 containerd[1475]: time="2024-07-02T00:21:53.811723342Z" level=info msg="RemovePodSandbox \"93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7\" returns successfully" Jul 2 00:21:53.812338 containerd[1475]: time="2024-07-02T00:21:53.812303737Z" level=info msg="StopPodSandbox for \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\"" Jul 2 00:21:53.812999 containerd[1475]: time="2024-07-02T00:21:53.812387087Z" level=info msg="TearDown network for sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" successfully" Jul 2 00:21:53.812999 containerd[1475]: time="2024-07-02T00:21:53.812401220Z" level=info msg="StopPodSandbox for \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" returns successfully" Jul 2 00:21:53.813386 containerd[1475]: time="2024-07-02T00:21:53.813365050Z" level=info msg="RemovePodSandbox for \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\"" Jul 2 00:21:53.814687 containerd[1475]: 
time="2024-07-02T00:21:53.813495298Z" level=info msg="Forcibly stopping sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\"" Jul 2 00:21:53.814687 containerd[1475]: time="2024-07-02T00:21:53.813564602Z" level=info msg="TearDown network for sandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" successfully" Jul 2 00:21:53.816241 containerd[1475]: time="2024-07-02T00:21:53.816209556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:21:53.816376 containerd[1475]: time="2024-07-02T00:21:53.816360679Z" level=info msg="RemovePodSandbox \"b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667\" returns successfully" Jul 2 00:21:53.856395 kubelet[2562]: E0702 00:21:53.856345 2562 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:21:53.870830 kubelet[2562]: I0702 00:21:53.870752 2562 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-bpf-maps\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.870830 kubelet[2562]: I0702 00:21:53.870819 2562 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-kernel\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.870830 kubelet[2562]: I0702 00:21:53.870835 2562 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-host-proc-sys-net\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" 
Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870853 2562 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-hostproc\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870867 2562 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lkcg9\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-kube-api-access-lkcg9\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870881 2562 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-run\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870895 2562 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc024084-5098-453a-94cf-bfb0964f844e-cilium-config-path\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870904 2562 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-lib-modules\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870914 2562 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s4plm\" (UniqueName: \"kubernetes.io/projected/dd115817-4e8c-461e-8f88-d4b90cd86369-kube-api-access-s4plm\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870923 2562 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-etc-cni-netd\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath 
\"\"" Jul 2 00:21:53.871068 kubelet[2562]: I0702 00:21:53.870935 2562 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc024084-5098-453a-94cf-bfb0964f844e-hubble-tls\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871261 kubelet[2562]: I0702 00:21:53.870943 2562 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd115817-4e8c-461e-8f88-d4b90cd86369-cilium-config-path\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871261 kubelet[2562]: I0702 00:21:53.870951 2562 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc024084-5098-453a-94cf-bfb0964f844e-clustermesh-secrets\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871261 kubelet[2562]: I0702 00:21:53.870962 2562 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cilium-cgroup\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871261 kubelet[2562]: I0702 00:21:53.870971 2562 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-xtables-lock\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:53.871261 kubelet[2562]: I0702 00:21:53.870978 2562 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc024084-5098-453a-94cf-bfb0964f844e-cni-path\") on node \"ci-3975.1.1-0-70f2b56eaa\" DevicePath \"\"" Jul 2 00:21:54.103033 systemd[1]: Removed slice kubepods-besteffort-poddd115817_4e8c_461e_8f88_d4b90cd86369.slice - libcontainer container kubepods-besteffort-poddd115817_4e8c_461e_8f88_d4b90cd86369.slice. 
Jul 2 00:21:54.111378 systemd[1]: Removed slice kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice - libcontainer container kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice. Jul 2 00:21:54.111997 systemd[1]: kubepods-burstable-podcc024084_5098_453a_94cf_bfb0964f844e.slice: Consumed 8.476s CPU time. Jul 2 00:21:54.431601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3736bc9f11a21bd8b1da13362bbecd8a1b5c1845ec621b71ac71c597bace667-rootfs.mount: Deactivated successfully. Jul 2 00:21:54.432055 systemd[1]: var-lib-kubelet-pods-dd115817\x2d4e8c\x2d461e\x2d8f88\x2dd4b90cd86369-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds4plm.mount: Deactivated successfully. Jul 2 00:21:54.432498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93dff57e11f9a39c8ac7b280ff05c610dfd2cbd7b449c85a4fa7aa37718b6af7-rootfs.mount: Deactivated successfully. Jul 2 00:21:54.432567 systemd[1]: var-lib-kubelet-pods-cc024084\x2d5098\x2d453a\x2d94cf\x2dbfb0964f844e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkcg9.mount: Deactivated successfully. Jul 2 00:21:54.432627 systemd[1]: var-lib-kubelet-pods-cc024084\x2d5098\x2d453a\x2d94cf\x2dbfb0964f844e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:21:54.432685 systemd[1]: var-lib-kubelet-pods-cc024084\x2d5098\x2d453a\x2d94cf\x2dbfb0964f844e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:21:55.351395 sshd[4208]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:55.369092 systemd[1]: sshd@29-146.190.126.73:22-147.75.109.163:43930.service: Deactivated successfully. Jul 2 00:21:55.372220 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:21:55.374430 systemd-logind[1456]: Session 29 logged out. Waiting for processes to exit. 
Jul 2 00:21:55.380443 systemd[1]: Started sshd@30-146.190.126.73:22-147.75.109.163:50588.service - OpenSSH per-connection server daemon (147.75.109.163:50588). Jul 2 00:21:55.382576 systemd-logind[1456]: Removed session 29. Jul 2 00:21:55.448483 sshd[4375]: Accepted publickey for core from 147.75.109.163 port 50588 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:55.450505 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:55.456253 systemd-logind[1456]: New session 30 of user core. Jul 2 00:21:55.466154 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 2 00:21:55.782996 kubelet[2562]: I0702 00:21:55.782918 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc024084-5098-453a-94cf-bfb0964f844e" path="/var/lib/kubelet/pods/cc024084-5098-453a-94cf-bfb0964f844e/volumes" Jul 2 00:21:55.784450 kubelet[2562]: I0702 00:21:55.784177 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd115817-4e8c-461e-8f88-d4b90cd86369" path="/var/lib/kubelet/pods/dd115817-4e8c-461e-8f88-d4b90cd86369/volumes" Jul 2 00:21:55.984622 sshd[4375]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:55.995247 systemd[1]: sshd@30-146.190.126.73:22-147.75.109.163:50588.service: Deactivated successfully. Jul 2 00:21:56.000843 systemd[1]: session-30.scope: Deactivated successfully. 
Jul 2 00:21:56.002370 kubelet[2562]: I0702 00:21:56.002328 2562 topology_manager.go:215] "Topology Admit Handler" podUID="c54b4063-f9d2-4387-9105-1665adf1afc2" podNamespace="kube-system" podName="cilium-5g9sk" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002413 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="apply-sysctl-overwrites" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002428 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd115817-4e8c-461e-8f88-d4b90cd86369" containerName="cilium-operator" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002435 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="cilium-agent" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002442 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="mount-cgroup" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002448 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="mount-bpf-fs" Jul 2 00:21:56.002482 kubelet[2562]: E0702 00:21:56.002453 2562 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="clean-cilium-state" Jul 2 00:21:56.002482 kubelet[2562]: I0702 00:21:56.002482 2562 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc024084-5098-453a-94cf-bfb0964f844e" containerName="cilium-agent" Jul 2 00:21:56.002668 kubelet[2562]: I0702 00:21:56.002489 2562 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd115817-4e8c-461e-8f88-d4b90cd86369" containerName="cilium-operator" Jul 2 00:21:56.007347 systemd-logind[1456]: Session 30 logged out. Waiting for processes to exit. 
Jul 2 00:21:56.016186 systemd[1]: Started sshd@31-146.190.126.73:22-147.75.109.163:50598.service - OpenSSH per-connection server daemon (147.75.109.163:50598). Jul 2 00:21:56.017068 systemd-logind[1456]: Removed session 30. Jul 2 00:21:56.037920 systemd[1]: Created slice kubepods-burstable-podc54b4063_f9d2_4387_9105_1665adf1afc2.slice - libcontainer container kubepods-burstable-podc54b4063_f9d2_4387_9105_1665adf1afc2.slice. Jul 2 00:21:56.085572 kubelet[2562]: I0702 00:21:56.085528 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-cilium-cgroup\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085572 kubelet[2562]: I0702 00:21:56.085575 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-cni-path\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085604 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-host-proc-sys-kernel\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085628 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-etc-cni-netd\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085652 2562 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c54b4063-f9d2-4387-9105-1665adf1afc2-clustermesh-secrets\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085673 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c54b4063-f9d2-4387-9105-1665adf1afc2-cilium-config-path\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085697 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-lib-modules\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.085764 kubelet[2562]: I0702 00:21:56.085719 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c54b4063-f9d2-4387-9105-1665adf1afc2-hubble-tls\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085740 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-bpf-maps\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085766 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-hostproc\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085791 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-xtables-lock\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085832 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c54b4063-f9d2-4387-9105-1665adf1afc2-cilium-ipsec-secrets\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085855 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-host-proc-sys-net\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086049 kubelet[2562]: I0702 00:21:56.085877 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c54b4063-f9d2-4387-9105-1665adf1afc2-cilium-run\") pod \"cilium-5g9sk\" (UID: \"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086195 kubelet[2562]: I0702 00:21:56.085904 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zlj\" (UniqueName: \"kubernetes.io/projected/c54b4063-f9d2-4387-9105-1665adf1afc2-kube-api-access-l8zlj\") pod \"cilium-5g9sk\" (UID: 
\"c54b4063-f9d2-4387-9105-1665adf1afc2\") " pod="kube-system/cilium-5g9sk" Jul 2 00:21:56.086796 sshd[4387]: Accepted publickey for core from 147.75.109.163 port 50598 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:56.092102 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:56.103084 systemd-logind[1456]: New session 31 of user core. Jul 2 00:21:56.109957 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 2 00:21:56.175745 sshd[4387]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:56.185235 systemd[1]: sshd@31-146.190.126.73:22-147.75.109.163:50598.service: Deactivated successfully. Jul 2 00:21:56.188682 systemd[1]: session-31.scope: Deactivated successfully. Jul 2 00:21:56.196143 systemd-logind[1456]: Session 31 logged out. Waiting for processes to exit. Jul 2 00:21:56.204100 systemd[1]: Started sshd@32-146.190.126.73:22-147.75.109.163:50606.service - OpenSSH per-connection server daemon (147.75.109.163:50606). Jul 2 00:21:56.229710 systemd-logind[1456]: Removed session 31. Jul 2 00:21:56.261106 sshd[4397]: Accepted publickey for core from 147.75.109.163 port 50606 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:21:56.262898 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:56.271172 systemd-logind[1456]: New session 32 of user core. Jul 2 00:21:56.279125 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jul 2 00:21:56.344689 kubelet[2562]: E0702 00:21:56.344510 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 2 00:21:56.346241 containerd[1475]: time="2024-07-02T00:21:56.346186206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g9sk,Uid:c54b4063-f9d2-4387-9105-1665adf1afc2,Namespace:kube-system,Attempt:0,}" Jul 2 00:21:56.383968 containerd[1475]: time="2024-07-02T00:21:56.381458599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:56.383968 containerd[1475]: time="2024-07-02T00:21:56.382236091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:56.383968 containerd[1475]: time="2024-07-02T00:21:56.382255747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:56.383968 containerd[1475]: time="2024-07-02T00:21:56.382265684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:56.397481 kubelet[2562]: I0702 00:21:56.397088 2562 setters.go:580] "Node became not ready" node="ci-3975.1.1-0-70f2b56eaa" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:21:56Z","lastTransitionTime":"2024-07-02T00:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:21:56.414029 systemd[1]: Started cri-containerd-fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5.scope - libcontainer container fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5. 
Jul 2 00:21:56.470041 containerd[1475]: time="2024-07-02T00:21:56.469976398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g9sk,Uid:c54b4063-f9d2-4387-9105-1665adf1afc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\""
Jul 2 00:21:56.473034 kubelet[2562]: E0702 00:21:56.472902 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:21:56.476696 containerd[1475]: time="2024-07-02T00:21:56.476168220Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:21:56.492393 containerd[1475]: time="2024-07-02T00:21:56.492259531Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29\""
Jul 2 00:21:56.497100 containerd[1475]: time="2024-07-02T00:21:56.497052314Z" level=info msg="StartContainer for \"ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29\""
Jul 2 00:21:56.530134 systemd[1]: Started cri-containerd-ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29.scope - libcontainer container ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29.
Jul 2 00:21:56.573975 containerd[1475]: time="2024-07-02T00:21:56.573752069Z" level=info msg="StartContainer for \"ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29\" returns successfully"
Jul 2 00:21:56.594939 systemd[1]: cri-containerd-ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29.scope: Deactivated successfully.
Jul 2 00:21:56.631368 containerd[1475]: time="2024-07-02T00:21:56.631166681Z" level=info msg="shim disconnected" id=ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29 namespace=k8s.io
Jul 2 00:21:56.631368 containerd[1475]: time="2024-07-02T00:21:56.631270444Z" level=warning msg="cleaning up after shim disconnected" id=ea9bbc53ee9ddbdb0272a4c799691920c8eb6f686889264a35bbba6cb31aeb29 namespace=k8s.io
Jul 2 00:21:56.631368 containerd[1475]: time="2024-07-02T00:21:56.631284157Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:57.102620 kubelet[2562]: E0702 00:21:57.102585 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:21:57.106023 containerd[1475]: time="2024-07-02T00:21:57.105769912Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:21:57.162215 containerd[1475]: time="2024-07-02T00:21:57.160251949Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d\""
Jul 2 00:21:57.163098 containerd[1475]: time="2024-07-02T00:21:57.163053525Z" level=info msg="StartContainer for \"43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d\""
Jul 2 00:21:57.228364 systemd[1]: run-containerd-runc-k8s.io-43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d-runc.UKkjjZ.mount: Deactivated successfully.
Jul 2 00:21:57.242039 systemd[1]: Started cri-containerd-43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d.scope - libcontainer container 43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d.
Jul 2 00:21:57.282370 containerd[1475]: time="2024-07-02T00:21:57.282295879Z" level=info msg="StartContainer for \"43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d\" returns successfully"
Jul 2 00:21:57.293605 systemd[1]: cri-containerd-43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d.scope: Deactivated successfully.
Jul 2 00:21:57.316043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d-rootfs.mount: Deactivated successfully.
Jul 2 00:21:57.320231 containerd[1475]: time="2024-07-02T00:21:57.320021522Z" level=info msg="shim disconnected" id=43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d namespace=k8s.io
Jul 2 00:21:57.320231 containerd[1475]: time="2024-07-02T00:21:57.320084647Z" level=warning msg="cleaning up after shim disconnected" id=43b1ee5818249d9c73aedc202701e331767e9b310db803de8ddbd226ed9d6c4d namespace=k8s.io
Jul 2 00:21:57.320231 containerd[1475]: time="2024-07-02T00:21:57.320094106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:58.107769 kubelet[2562]: E0702 00:21:58.107372 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:21:58.111220 containerd[1475]: time="2024-07-02T00:21:58.111180249Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:21:58.130847 containerd[1475]: time="2024-07-02T00:21:58.130691592Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318\""
Jul 2 00:21:58.134873 containerd[1475]: time="2024-07-02T00:21:58.131594435Z" level=info msg="StartContainer for \"e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318\""
Jul 2 00:21:58.173102 systemd[1]: Started cri-containerd-e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318.scope - libcontainer container e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318.
Jul 2 00:21:58.213646 containerd[1475]: time="2024-07-02T00:21:58.213490305Z" level=info msg="StartContainer for \"e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318\" returns successfully"
Jul 2 00:21:58.220604 systemd[1]: cri-containerd-e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318.scope: Deactivated successfully.
Jul 2 00:21:58.248756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318-rootfs.mount: Deactivated successfully.
Jul 2 00:21:58.253450 containerd[1475]: time="2024-07-02T00:21:58.253371908Z" level=info msg="shim disconnected" id=e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318 namespace=k8s.io
Jul 2 00:21:58.253450 containerd[1475]: time="2024-07-02T00:21:58.253433640Z" level=warning msg="cleaning up after shim disconnected" id=e62828cd6e5efec44679a87d20fee909a91e16a5757dfe3b69d399f7b9d0a318 namespace=k8s.io
Jul 2 00:21:58.253450 containerd[1475]: time="2024-07-02T00:21:58.253449031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:58.858469 kubelet[2562]: E0702 00:21:58.858362 2562 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:21:59.113041 kubelet[2562]: E0702 00:21:59.112479 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:21:59.116505 containerd[1475]: time="2024-07-02T00:21:59.116462601Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:21:59.132074 containerd[1475]: time="2024-07-02T00:21:59.132023824Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195\""
Jul 2 00:21:59.134641 containerd[1475]: time="2024-07-02T00:21:59.134565118Z" level=info msg="StartContainer for \"37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195\""
Jul 2 00:21:59.177632 systemd[1]: Started cri-containerd-37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195.scope - libcontainer container 37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195.
Jul 2 00:21:59.230079 containerd[1475]: time="2024-07-02T00:21:59.230020814Z" level=info msg="StartContainer for \"37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195\" returns successfully"
Jul 2 00:21:59.230679 systemd[1]: cri-containerd-37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195.scope: Deactivated successfully.
Jul 2 00:21:59.255124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195-rootfs.mount: Deactivated successfully.
Jul 2 00:21:59.257722 containerd[1475]: time="2024-07-02T00:21:59.257621350Z" level=info msg="shim disconnected" id=37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195 namespace=k8s.io
Jul 2 00:21:59.257722 containerd[1475]: time="2024-07-02T00:21:59.257698854Z" level=warning msg="cleaning up after shim disconnected" id=37723f31e1ca66eb300ffd5eb29a333819b511ee69d1f9f7772da2e1ec93f195 namespace=k8s.io
Jul 2 00:21:59.258165 containerd[1475]: time="2024-07-02T00:21:59.257858332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:22:00.120318 kubelet[2562]: E0702 00:22:00.120150 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:22:00.127155 containerd[1475]: time="2024-07-02T00:22:00.127099205Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:22:00.154278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461834414.mount: Deactivated successfully.
Jul 2 00:22:00.158705 containerd[1475]: time="2024-07-02T00:22:00.158607523Z" level=info msg="CreateContainer within sandbox \"fe143b01a451c52152917779acb9a86a8224463371494f7787b6c9b993b549e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf\""
Jul 2 00:22:00.160425 containerd[1475]: time="2024-07-02T00:22:00.160233063Z" level=info msg="StartContainer for \"297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf\""
Jul 2 00:22:00.201118 systemd[1]: Started cri-containerd-297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf.scope - libcontainer container 297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf.
Jul 2 00:22:00.247108 containerd[1475]: time="2024-07-02T00:22:00.246721988Z" level=info msg="StartContainer for \"297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf\" returns successfully"
Jul 2 00:22:00.282217 systemd[1]: run-containerd-runc-k8s.io-297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf-runc.Axt2vh.mount: Deactivated successfully.
Jul 2 00:22:00.775075 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:22:01.128604 kubelet[2562]: E0702 00:22:01.128365 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:22:02.347626 kubelet[2562]: E0702 00:22:02.347347 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:22:02.841633 systemd[1]: run-containerd-runc-k8s.io-297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf-runc.S8AkHP.mount: Deactivated successfully.
Jul 2 00:22:04.075049 systemd-networkd[1368]: lxc_health: Link UP
Jul 2 00:22:04.092456 systemd-networkd[1368]: lxc_health: Gained carrier
Jul 2 00:22:04.348126 kubelet[2562]: E0702 00:22:04.347986 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:22:04.372793 kubelet[2562]: I0702 00:22:04.372717 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5g9sk" podStartSLOduration=9.372690579 podStartE2EDuration="9.372690579s" podCreationTimestamp="2024-07-02 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:01.150722907 +0000 UTC m=+127.516050163" watchObservedRunningTime="2024-07-02 00:22:04.372690579 +0000 UTC m=+130.738017838"
Jul 2 00:22:05.143530 kubelet[2562]: E0702 00:22:05.143332 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 2 00:22:05.146732 systemd[1]: run-containerd-runc-k8s.io-297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf-runc.fkYeH1.mount: Deactivated successfully.
Jul 2 00:22:05.210059 kubelet[2562]: E0702 00:22:05.209971 2562 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33152->127.0.0.1:42943: write tcp 127.0.0.1:33152->127.0.0.1:42943: write: broken pipe
Jul 2 00:22:06.110053 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jul 2 00:22:09.506571 systemd[1]: run-containerd-runc-k8s.io-297c39e209433f7dd0db56778b17ec6bc065fee60fa93ce4a292465461df82cf-runc.mLPItV.mount: Deactivated successfully.
Jul 2 00:22:09.588035 sshd[4397]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:09.592436 systemd[1]: sshd@32-146.190.126.73:22-147.75.109.163:50606.service: Deactivated successfully.
Jul 2 00:22:09.595156 systemd[1]: session-32.scope: Deactivated successfully.
Jul 2 00:22:09.597970 systemd-logind[1456]: Session 32 logged out. Waiting for processes to exit.
Jul 2 00:22:09.599595 systemd-logind[1456]: Removed session 32.