Jun 26 07:17:47.070051 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 26 07:17:47.070093 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:17:47.070108 kernel: BIOS-provided physical RAM map:
Jun 26 07:17:47.070120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 26 07:17:47.070129 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 26 07:17:47.070139 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 26 07:17:47.070150 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jun 26 07:17:47.070162 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jun 26 07:17:47.070173 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 26 07:17:47.070187 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 26 07:17:47.070198 kernel: NX (Execute Disable) protection: active
Jun 26 07:17:47.070208 kernel: APIC: Static calls initialized
Jun 26 07:17:47.070215 kernel: SMBIOS 2.8 present.
Jun 26 07:17:47.070222 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jun 26 07:17:47.070231 kernel: Hypervisor detected: KVM
Jun 26 07:17:47.070247 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 26 07:17:47.070254 kernel: kvm-clock: using sched offset of 3286255471 cycles
Jun 26 07:17:47.070266 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 26 07:17:47.070274 kernel: tsc: Detected 2494.140 MHz processor
Jun 26 07:17:47.070283 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 26 07:17:47.070293 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 26 07:17:47.070302 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jun 26 07:17:47.070310 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 26 07:17:47.070317 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 26 07:17:47.070328 kernel: ACPI: Early table checksum verification disabled
Jun 26 07:17:47.070335 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jun 26 07:17:47.070343 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070351 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070359 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070366 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jun 26 07:17:47.070374 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070382 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070390 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070400 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 26 07:17:47.070407 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jun 26 07:17:47.070415 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jun 26 07:17:47.070423 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jun 26 07:17:47.070430 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jun 26 07:17:47.070438 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jun 26 07:17:47.070446 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jun 26 07:17:47.070460 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jun 26 07:17:47.070468 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 26 07:17:47.070476 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 26 07:17:47.070485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jun 26 07:17:47.070493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jun 26 07:17:47.070501 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jun 26 07:17:47.070510 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jun 26 07:17:47.070520 kernel: Zone ranges:
Jun 26 07:17:47.070528 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 26 07:17:47.070537 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jun 26 07:17:47.070545 kernel: Normal empty
Jun 26 07:17:47.070553 kernel: Movable zone start for each node
Jun 26 07:17:47.070561 kernel: Early memory node ranges
Jun 26 07:17:47.070569 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 26 07:17:47.070577 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jun 26 07:17:47.070585 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jun 26 07:17:47.070596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 26 07:17:47.070604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 26 07:17:47.070612 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jun 26 07:17:47.070620 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 26 07:17:47.070629 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 26 07:17:47.070637 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 26 07:17:47.070645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 26 07:17:47.070653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 26 07:17:47.070661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 26 07:17:47.070672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 26 07:17:47.070683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 26 07:17:47.070691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 26 07:17:47.070699 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 26 07:17:47.070707 kernel: TSC deadline timer available
Jun 26 07:17:47.070715 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 26 07:17:47.070724 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 26 07:17:47.070732 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jun 26 07:17:47.070740 kernel: Booting paravirtualized kernel on KVM
Jun 26 07:17:47.070752 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 26 07:17:47.070764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 26 07:17:47.070772 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 26 07:17:47.070781 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 26 07:17:47.070791 kernel: pcpu-alloc: [0] 0 1
Jun 26 07:17:47.070839 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 26 07:17:47.070849 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 26 07:17:47.070858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 26 07:17:47.070870 kernel: random: crng init done
Jun 26 07:17:47.070878 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 26 07:17:47.070886 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 26 07:17:47.070895 kernel: Fallback order for Node 0: 0
Jun 26 07:17:47.070903 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jun 26 07:17:47.070911 kernel: Policy zone: DMA32
Jun 26 07:17:47.070919 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 26 07:17:47.070928 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved)
Jun 26 07:17:47.070936 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 26 07:17:47.070947 kernel: Kernel/User page tables isolation: enabled
Jun 26 07:17:47.070956 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 26 07:17:47.070964 kernel: ftrace: allocated 148 pages with 3 groups
Jun 26 07:17:47.070972 kernel: Dynamic Preempt: voluntary
Jun 26 07:17:47.070984 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 26 07:17:47.070997 kernel: rcu: RCU event tracing is enabled.
Jun 26 07:17:47.071008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 26 07:17:47.071020 kernel: Trampoline variant of Tasks RCU enabled.
Jun 26 07:17:47.071033 kernel: Rude variant of Tasks RCU enabled.
Jun 26 07:17:47.071042 kernel: Tracing variant of Tasks RCU enabled.
Jun 26 07:17:47.071054 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 26 07:17:47.071062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 26 07:17:47.071070 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 26 07:17:47.071079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 26 07:17:47.071087 kernel: Console: colour VGA+ 80x25
Jun 26 07:17:47.071095 kernel: printk: console [tty0] enabled
Jun 26 07:17:47.071105 kernel: printk: console [ttyS0] enabled
Jun 26 07:17:47.071117 kernel: ACPI: Core revision 20230628
Jun 26 07:17:47.071128 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 26 07:17:47.071142 kernel: APIC: Switch to symmetric I/O mode setup
Jun 26 07:17:47.071154 kernel: x2apic enabled
Jun 26 07:17:47.071167 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 26 07:17:47.071184 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 26 07:17:47.071196 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jun 26 07:17:47.071209 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jun 26 07:17:47.071220 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 26 07:17:47.071233 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 26 07:17:47.071259 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 26 07:17:47.071273 kernel: Spectre V2 : Mitigation: Retpolines
Jun 26 07:17:47.071282 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 26 07:17:47.071294 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 26 07:17:47.071302 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 26 07:17:47.071311 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 26 07:17:47.071320 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 26 07:17:47.071329 kernel: MDS: Mitigation: Clear CPU buffers
Jun 26 07:17:47.071338 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 26 07:17:47.071353 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 26 07:17:47.071366 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 26 07:17:47.071375 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 26 07:17:47.071383 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 26 07:17:47.071392 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jun 26 07:17:47.071401 kernel: Freeing SMP alternatives memory: 32K
Jun 26 07:17:47.071413 kernel: pid_max: default: 32768 minimum: 301
Jun 26 07:17:47.071422 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 26 07:17:47.071434 kernel: SELinux: Initializing.
Jun 26 07:17:47.071443 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:17:47.071451 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 26 07:17:47.071460 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jun 26 07:17:47.071469 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:17:47.071478 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:17:47.071487 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 26 07:17:47.071496 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jun 26 07:17:47.071505 kernel: signal: max sigframe size: 1776
Jun 26 07:17:47.071516 kernel: rcu: Hierarchical SRCU implementation.
Jun 26 07:17:47.071525 kernel: rcu: Max phase no-delay instances is 400.
Jun 26 07:17:47.071534 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 26 07:17:47.071543 kernel: smp: Bringing up secondary CPUs ...
Jun 26 07:17:47.071553 kernel: smpboot: x86: Booting SMP configuration:
Jun 26 07:17:47.071567 kernel: .... node #0, CPUs: #1
Jun 26 07:17:47.071579 kernel: smp: Brought up 1 node, 2 CPUs
Jun 26 07:17:47.071592 kernel: smpboot: Max logical packages: 1
Jun 26 07:17:47.071603 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jun 26 07:17:47.071616 kernel: devtmpfs: initialized
Jun 26 07:17:47.071625 kernel: x86/mm: Memory block size: 128MB
Jun 26 07:17:47.071634 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 26 07:17:47.071643 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 26 07:17:47.071652 kernel: pinctrl core: initialized pinctrl subsystem
Jun 26 07:17:47.071661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 26 07:17:47.071669 kernel: audit: initializing netlink subsys (disabled)
Jun 26 07:17:47.071681 kernel: audit: type=2000 audit(1719386265.763:1): state=initialized audit_enabled=0 res=1
Jun 26 07:17:47.071690 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 26 07:17:47.071702 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 26 07:17:47.071711 kernel: cpuidle: using governor menu
Jun 26 07:17:47.071722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 26 07:17:47.071735 kernel: dca service started, version 1.12.1
Jun 26 07:17:47.071748 kernel: PCI: Using configuration type 1 for base access
Jun 26 07:17:47.071758 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 26 07:17:47.071766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 26 07:17:47.071775 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 26 07:17:47.071784 kernel: ACPI: Added _OSI(Module Device)
Jun 26 07:17:47.071840 kernel: ACPI: Added _OSI(Processor Device)
Jun 26 07:17:47.071851 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 26 07:17:47.071859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 26 07:17:47.071868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 26 07:17:47.071877 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 26 07:17:47.071886 kernel: ACPI: Interpreter enabled
Jun 26 07:17:47.071894 kernel: ACPI: PM: (supports S0 S5)
Jun 26 07:17:47.071903 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 26 07:17:47.071912 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 26 07:17:47.071932 kernel: PCI: Using E820 reservations for host bridge windows
Jun 26 07:17:47.071946 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 26 07:17:47.071959 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 26 07:17:47.072254 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 26 07:17:47.072380 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 26 07:17:47.072477 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 26 07:17:47.072491 kernel: acpiphp: Slot [3] registered
Jun 26 07:17:47.072506 kernel: acpiphp: Slot [4] registered
Jun 26 07:17:47.072515 kernel: acpiphp: Slot [5] registered
Jun 26 07:17:47.072529 kernel: acpiphp: Slot [6] registered
Jun 26 07:17:47.072538 kernel: acpiphp: Slot [7] registered
Jun 26 07:17:47.072547 kernel: acpiphp: Slot [8] registered
Jun 26 07:17:47.072556 kernel: acpiphp: Slot [9] registered
Jun 26 07:17:47.072565 kernel: acpiphp: Slot [10] registered
Jun 26 07:17:47.072574 kernel: acpiphp: Slot [11] registered
Jun 26 07:17:47.072583 kernel: acpiphp: Slot [12] registered
Jun 26 07:17:47.072591 kernel: acpiphp: Slot [13] registered
Jun 26 07:17:47.072604 kernel: acpiphp: Slot [14] registered
Jun 26 07:17:47.072613 kernel: acpiphp: Slot [15] registered
Jun 26 07:17:47.072622 kernel: acpiphp: Slot [16] registered
Jun 26 07:17:47.072631 kernel: acpiphp: Slot [17] registered
Jun 26 07:17:47.072639 kernel: acpiphp: Slot [18] registered
Jun 26 07:17:47.072648 kernel: acpiphp: Slot [19] registered
Jun 26 07:17:47.072657 kernel: acpiphp: Slot [20] registered
Jun 26 07:17:47.072666 kernel: acpiphp: Slot [21] registered
Jun 26 07:17:47.072682 kernel: acpiphp: Slot [22] registered
Jun 26 07:17:47.072697 kernel: acpiphp: Slot [23] registered
Jun 26 07:17:47.072706 kernel: acpiphp: Slot [24] registered
Jun 26 07:17:47.072715 kernel: acpiphp: Slot [25] registered
Jun 26 07:17:47.072724 kernel: acpiphp: Slot [26] registered
Jun 26 07:17:47.072733 kernel: acpiphp: Slot [27] registered
Jun 26 07:17:47.072742 kernel: acpiphp: Slot [28] registered
Jun 26 07:17:47.072750 kernel: acpiphp: Slot [29] registered
Jun 26 07:17:47.072759 kernel: acpiphp: Slot [30] registered
Jun 26 07:17:47.072768 kernel: acpiphp: Slot [31] registered
Jun 26 07:17:47.072780 kernel: PCI host bridge to bus 0000:00
Jun 26 07:17:47.072906 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 26 07:17:47.072996 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 26 07:17:47.073082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 26 07:17:47.073210 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jun 26 07:17:47.073297 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 26 07:17:47.073434 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 26 07:17:47.073633 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 26 07:17:47.073792 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 26 07:17:47.074572 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 26 07:17:47.074692 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jun 26 07:17:47.074880 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 26 07:17:47.075038 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 26 07:17:47.075179 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 26 07:17:47.075347 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 26 07:17:47.075548 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jun 26 07:17:47.075652 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jun 26 07:17:47.075751 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 26 07:17:47.075856 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 26 07:17:47.075957 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 26 07:17:47.076105 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jun 26 07:17:47.076201 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jun 26 07:17:47.076295 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jun 26 07:17:47.076388 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jun 26 07:17:47.076480 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jun 26 07:17:47.076577 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 26 07:17:47.076712 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:17:47.076908 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jun 26 07:17:47.077032 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jun 26 07:17:47.077154 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jun 26 07:17:47.077305 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jun 26 07:17:47.077476 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jun 26 07:17:47.077636 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jun 26 07:17:47.077775 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jun 26 07:17:47.078010 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jun 26 07:17:47.078113 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jun 26 07:17:47.078208 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jun 26 07:17:47.078300 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jun 26 07:17:47.078422 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:17:47.078533 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jun 26 07:17:47.078665 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jun 26 07:17:47.078764 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jun 26 07:17:47.078969 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jun 26 07:17:47.079127 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jun 26 07:17:47.079283 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jun 26 07:17:47.079441 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jun 26 07:17:47.079630 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jun 26 07:17:47.079756 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jun 26 07:17:47.080622 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jun 26 07:17:47.080645 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 26 07:17:47.080655 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 26 07:17:47.080665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 26 07:17:47.080674 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 26 07:17:47.080683 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 26 07:17:47.080692 kernel: iommu: Default domain type: Translated
Jun 26 07:17:47.080708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 26 07:17:47.080717 kernel: PCI: Using ACPI for IRQ routing
Jun 26 07:17:47.080726 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 26 07:17:47.080735 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 26 07:17:47.080744 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jun 26 07:17:47.080883 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 26 07:17:47.081016 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 26 07:17:47.081152 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 26 07:17:47.081172 kernel: vgaarb: loaded
Jun 26 07:17:47.081181 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 26 07:17:47.081190 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 26 07:17:47.081199 kernel: clocksource: Switched to clocksource kvm-clock
Jun 26 07:17:47.081208 kernel: VFS: Disk quotas dquot_6.6.0
Jun 26 07:17:47.081218 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 26 07:17:47.081227 kernel: pnp: PnP ACPI init
Jun 26 07:17:47.081250 kernel: pnp: PnP ACPI: found 4 devices
Jun 26 07:17:47.081263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 26 07:17:47.081281 kernel: NET: Registered PF_INET protocol family
Jun 26 07:17:47.081294 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 26 07:17:47.081307 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 26 07:17:47.081318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 26 07:17:47.081332 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 26 07:17:47.081344 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 26 07:17:47.081356 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 26 07:17:47.081385 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:17:47.081399 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 26 07:17:47.081417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 26 07:17:47.081430 kernel: NET: Registered PF_XDP protocol family
Jun 26 07:17:47.081590 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 26 07:17:47.081716 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 26 07:17:47.081969 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 26 07:17:47.082104 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jun 26 07:17:47.082201 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 26 07:17:47.082309 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 26 07:17:47.082435 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 26 07:17:47.082449 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 26 07:17:47.082555 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30845 usecs
Jun 26 07:17:47.082568 kernel: PCI: CLS 0 bytes, default 64
Jun 26 07:17:47.082578 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 26 07:17:47.082587 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jun 26 07:17:47.082596 kernel: Initialise system trusted keyrings
Jun 26 07:17:47.082606 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 26 07:17:47.082618 kernel: Key type asymmetric registered
Jun 26 07:17:47.082627 kernel: Asymmetric key parser 'x509' registered
Jun 26 07:17:47.082636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 26 07:17:47.082645 kernel: io scheduler mq-deadline registered
Jun 26 07:17:47.082655 kernel: io scheduler kyber registered
Jun 26 07:17:47.082664 kernel: io scheduler bfq registered
Jun 26 07:17:47.082673 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 26 07:17:47.082682 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 26 07:17:47.082691 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 26 07:17:47.082700 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 26 07:17:47.082715 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 26 07:17:47.082729 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 26 07:17:47.082739 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 26 07:17:47.082748 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 26 07:17:47.082757 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 26 07:17:47.082766 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 26 07:17:47.082979 kernel: rtc_cmos 00:03: RTC can wake from S4
Jun 26 07:17:47.083117 kernel: rtc_cmos 00:03: registered as rtc0
Jun 26 07:17:47.083222 kernel: rtc_cmos 00:03: setting system clock to 2024-06-26T07:17:46 UTC (1719386266)
Jun 26 07:17:47.083331 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jun 26 07:17:47.083344 kernel: intel_pstate: CPU model not supported
Jun 26 07:17:47.083353 kernel: NET: Registered PF_INET6 protocol family
Jun 26 07:17:47.083362 kernel: Segment Routing with IPv6
Jun 26 07:17:47.083371 kernel: In-situ OAM (IOAM) with IPv6
Jun 26 07:17:47.083381 kernel: NET: Registered PF_PACKET protocol family
Jun 26 07:17:47.083390 kernel: Key type dns_resolver registered
Jun 26 07:17:47.083403 kernel: IPI shorthand broadcast: enabled
Jun 26 07:17:47.083412 kernel: sched_clock: Marking stable (1195042983, 119718045)->(1344693933, -29932905)
Jun 26 07:17:47.083421 kernel: registered taskstats version 1
Jun 26 07:17:47.083433 kernel: Loading compiled-in X.509 certificates
Jun 26 07:17:47.083444 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 26 07:17:47.083453 kernel: Key type .fscrypt registered
Jun 26 07:17:47.083462 kernel: Key type fscrypt-provisioning registered
Jun 26 07:17:47.083471 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 26 07:17:47.083482 kernel: ima: Allocated hash algorithm: sha1
Jun 26 07:17:47.083499 kernel: ima: No architecture policies found
Jun 26 07:17:47.083512 kernel: clk: Disabling unused clocks
Jun 26 07:17:47.083521 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 26 07:17:47.083530 kernel: Write protecting the kernel read-only data: 36864k
Jun 26 07:17:47.083544 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 26 07:17:47.083579 kernel: Run /init as init process
Jun 26 07:17:47.083591 kernel: with arguments:
Jun 26 07:17:47.083601 kernel: /init
Jun 26 07:17:47.083610 kernel: with environment:
Jun 26 07:17:47.083622 kernel: HOME=/
Jun 26 07:17:47.083631 kernel: TERM=linux
Jun 26 07:17:47.083640 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 26 07:17:47.083659 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 26 07:17:47.083671 systemd[1]: Detected virtualization kvm.
Jun 26 07:17:47.083684 systemd[1]: Detected architecture x86-64.
Jun 26 07:17:47.083694 systemd[1]: Running in initrd.
Jun 26 07:17:47.083703 systemd[1]: No hostname configured, using default hostname.
Jun 26 07:17:47.083716 systemd[1]: Hostname set to .
Jun 26 07:17:47.083730 systemd[1]: Initializing machine ID from VM UUID.
Jun 26 07:17:47.083740 systemd[1]: Queued start job for default target initrd.target.
Jun 26 07:17:47.083749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 26 07:17:47.083760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 26 07:17:47.083774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 26 07:17:47.083789 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 26 07:17:47.083828 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 26 07:17:47.083849 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 26 07:17:47.083860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 26 07:17:47.083870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 26 07:17:47.083880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 26 07:17:47.083890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:17:47.083899 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:17:47.083914 systemd[1]: Reached target slices.target - Slice Units.
Jun 26 07:17:47.083927 systemd[1]: Reached target swap.target - Swaps.
Jun 26 07:17:47.083937 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:17:47.083950 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 26 07:17:47.083960 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 26 07:17:47.083971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 26 07:17:47.083983 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 26 07:17:47.083993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:17:47.084003 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 26 07:17:47.084012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:17:47.084022 systemd[1]: Reached target sockets.target - Socket Units. Jun 26 07:17:47.084032 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 26 07:17:47.084044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 26 07:17:47.084059 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 26 07:17:47.084078 systemd[1]: Starting systemd-fsck-usr.service... Jun 26 07:17:47.084092 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 26 07:17:47.084104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 26 07:17:47.084114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:17:47.084124 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 26 07:17:47.084134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:17:47.084178 systemd-journald[184]: Collecting audit messages is disabled. Jun 26 07:17:47.084207 systemd[1]: Finished systemd-fsck-usr.service. Jun 26 07:17:47.084219 systemd-journald[184]: Journal started Jun 26 07:17:47.084243 systemd-journald[184]: Runtime Journal (/run/log/journal/2137bea3331c4de3ba7536284455e644) is 4.9M, max 39.3M, 34.4M free. Jun 26 07:17:47.077881 systemd-modules-load[185]: Inserted module 'overlay' Jun 26 07:17:47.091828 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 26 07:17:47.097836 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 26 07:17:47.111722 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 26 07:17:47.142784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 26 07:17:47.142834 kernel: Bridge firewalling registered Jun 26 07:17:47.132076 systemd-modules-load[185]: Inserted module 'br_netfilter' Jun 26 07:17:47.143604 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 26 07:17:47.144290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:17:47.154963 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 26 07:17:47.164107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 26 07:17:47.167249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:17:47.177075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 26 07:17:47.180420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 26 07:17:47.192858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:17:47.203090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 26 07:17:47.204077 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:17:47.205231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:17:47.209447 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 26 07:17:47.238842 dracut-cmdline[219]: dracut-dracut-053 Jun 26 07:17:47.241058 systemd-resolved[216]: Positive Trust Anchors: Jun 26 07:17:47.241089 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 26 07:17:47.241157 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:17:47.245135 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 26 07:17:47.247386 systemd-resolved[216]: Defaulting to hostname 'linux'. Jun 26 07:17:47.249639 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 26 07:17:47.251070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 26 07:17:47.363840 kernel: SCSI subsystem initialized Jun 26 07:17:47.376860 kernel: Loading iSCSI transport class v2.0-870. Jun 26 07:17:47.391844 kernel: iscsi: registered transport (tcp) Jun 26 07:17:47.423876 kernel: iscsi: registered transport (qla4xxx) Jun 26 07:17:47.423970 kernel: QLogic iSCSI HBA Driver Jun 26 07:17:47.485176 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 26 07:17:47.497155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 26 07:17:47.533541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 26 07:17:47.533623 kernel: device-mapper: uevent: version 1.0.3 Jun 26 07:17:47.534568 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 26 07:17:47.584866 kernel: raid6: avx2x4 gen() 15788 MB/s Jun 26 07:17:47.601866 kernel: raid6: avx2x2 gen() 16237 MB/s Jun 26 07:17:47.619014 kernel: raid6: avx2x1 gen() 11818 MB/s Jun 26 07:17:47.619107 kernel: raid6: using algorithm avx2x2 gen() 16237 MB/s Jun 26 07:17:47.636878 kernel: raid6: .... xor() 18344 MB/s, rmw enabled Jun 26 07:17:47.636964 kernel: raid6: using avx2x2 recovery algorithm Jun 26 07:17:47.667853 kernel: xor: automatically using best checksumming function avx Jun 26 07:17:47.887861 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 26 07:17:47.903178 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 26 07:17:47.915217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 26 07:17:47.931042 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jun 26 07:17:47.937697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:17:47.945233 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 26 07:17:47.964456 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jun 26 07:17:48.005134 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 26 07:17:48.010028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 26 07:17:48.091503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:17:48.100386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 26 07:17:48.136209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 26 07:17:48.139226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 26 07:17:48.139788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:17:48.140298 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 26 07:17:48.150025 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 26 07:17:48.166211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 26 07:17:48.195835 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 26 07:17:48.258555 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 26 07:17:48.258776 kernel: cryptd: max_cpu_qlen set to 1000 Jun 26 07:17:48.258818 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 26 07:17:48.261659 kernel: GPT:9289727 != 125829119 Jun 26 07:17:48.261702 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 26 07:17:48.261724 kernel: GPT:9289727 != 125829119 Jun 26 07:17:48.261744 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 26 07:17:48.261764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:17:48.261785 kernel: scsi host0: Virtio SCSI HBA Jun 26 07:17:48.262102 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 26 07:17:48.315144 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jun 26 07:17:48.315348 kernel: AVX2 version of gcm_enc/dec engaged. Jun 26 07:17:48.315374 kernel: AES CTR mode by8 optimization enabled Jun 26 07:17:48.315396 kernel: libata version 3.00 loaded. Jun 26 07:17:48.263044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jun 26 07:17:48.404903 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 26 07:17:48.405219 kernel: scsi host1: ata_piix Jun 26 07:17:48.405449 kernel: scsi host2: ata_piix Jun 26 07:17:48.405633 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jun 26 07:17:48.405657 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jun 26 07:17:48.405678 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Jun 26 07:17:48.405700 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (452) Jun 26 07:17:48.405730 kernel: ACPI: bus type USB registered Jun 26 07:17:48.405751 kernel: usbcore: registered new interface driver usbfs Jun 26 07:17:48.405773 kernel: usbcore: registered new interface driver hub Jun 26 07:17:48.405795 kernel: usbcore: registered new device driver usb Jun 26 07:17:48.263227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:17:48.263983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 26 07:17:48.265809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:17:48.266045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:17:48.266608 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:17:48.278161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:17:48.410247 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 26 07:17:48.413108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:17:48.420080 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 26 07:17:48.427536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 26 07:17:48.432316 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 26 07:17:48.432831 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 26 07:17:48.444097 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 26 07:17:48.446048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 26 07:17:48.451371 disk-uuid[533]: Primary Header is updated. Jun 26 07:17:48.451371 disk-uuid[533]: Secondary Entries is updated. Jun 26 07:17:48.451371 disk-uuid[533]: Secondary Header is updated. Jun 26 07:17:48.457626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:17:48.460910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:17:48.468882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:17:48.484380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:17:48.604890 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 26 07:17:48.612576 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 26 07:17:48.612727 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 26 07:17:48.612878 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 26 07:17:48.612989 kernel: hub 1-0:1.0: USB hub found Jun 26 07:17:48.613131 kernel: hub 1-0:1.0: 2 ports detected Jun 26 07:17:49.468939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:17:49.469213 disk-uuid[535]: The operation has completed successfully. Jun 26 07:17:49.513618 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 26 07:17:49.513747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 26 07:17:49.525090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jun 26 07:17:49.533530 sh[564]: Success Jun 26 07:17:49.562845 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 26 07:17:49.615284 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 26 07:17:49.629323 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 26 07:17:49.632686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 26 07:17:49.656061 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 26 07:17:49.656137 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:17:49.657116 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 26 07:17:49.659020 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 26 07:17:49.659084 kernel: BTRFS info (device dm-0): using free space tree Jun 26 07:17:49.667683 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 26 07:17:49.668895 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 26 07:17:49.675050 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 26 07:17:49.679066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 26 07:17:49.690441 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:17:49.690511 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:17:49.690530 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:17:49.693828 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:17:49.706235 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jun 26 07:17:49.707326 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:17:49.714448 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 26 07:17:49.721358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 26 07:17:49.874155 ignition[658]: Ignition 2.19.0 Jun 26 07:17:49.874175 ignition[658]: Stage: fetch-offline Jun 26 07:17:49.874267 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:49.874287 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:49.876941 ignition[658]: parsed url from cmdline: "" Jun 26 07:17:49.876951 ignition[658]: no config URL provided Jun 26 07:17:49.876965 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jun 26 07:17:49.878899 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 26 07:17:49.876985 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jun 26 07:17:49.879908 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 26 07:17:49.876996 ignition[658]: failed to fetch config: resource requires networking Jun 26 07:17:49.877384 ignition[658]: Ignition finished successfully Jun 26 07:17:49.890147 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 26 07:17:49.929349 systemd-networkd[754]: lo: Link UP Jun 26 07:17:49.929384 systemd-networkd[754]: lo: Gained carrier Jun 26 07:17:49.932911 systemd-networkd[754]: Enumeration completed Jun 26 07:17:49.933517 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 26 07:17:49.933524 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jun 26 07:17:49.934676 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 26 07:17:49.934682 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 26 07:17:49.935647 systemd-networkd[754]: eth0: Link UP Jun 26 07:17:49.935654 systemd-networkd[754]: eth0: Gained carrier Jun 26 07:17:49.935667 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 26 07:17:49.936006 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 26 07:17:49.936848 systemd[1]: Reached target network.target - Network. Jun 26 07:17:49.939273 systemd-networkd[754]: eth1: Link UP Jun 26 07:17:49.939293 systemd-networkd[754]: eth1: Gained carrier Jun 26 07:17:49.939312 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 26 07:17:49.946385 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 26 07:17:49.955991 systemd-networkd[754]: eth0: DHCPv4 address 64.23.160.249/20, gateway 64.23.160.1 acquired from 169.254.169.253 Jun 26 07:17:49.959937 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253 Jun 26 07:17:49.969215 ignition[756]: Ignition 2.19.0 Jun 26 07:17:49.969234 ignition[756]: Stage: fetch Jun 26 07:17:49.969705 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:49.969726 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:49.969922 ignition[756]: parsed url from cmdline: "" Jun 26 07:17:49.969930 ignition[756]: no config URL provided Jun 26 07:17:49.969941 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Jun 26 07:17:49.969956 ignition[756]: no config at "/usr/lib/ignition/user.ign" Jun 26 07:17:49.969986 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 26 07:17:50.004245 ignition[756]: GET result: OK Jun 26 07:17:50.004374 ignition[756]: parsing config with SHA512: 47489fb30484f05217056062ec81194c4935e54b2d7d933dd1e97a942dc6467c7d4ccebb3df5be3a3b64abfa674433230c80a09270778198bfaa352335c57701 Jun 26 07:17:50.012783 unknown[756]: fetched base config from "system" Jun 26 07:17:50.012813 unknown[756]: fetched base config from "system" Jun 26 07:17:50.012823 unknown[756]: fetched user config from "digitalocean" Jun 26 07:17:50.014896 ignition[756]: fetch: fetch complete Jun 26 07:17:50.014908 ignition[756]: fetch: fetch passed Jun 26 07:17:50.016931 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 26 07:17:50.014984 ignition[756]: Ignition finished successfully Jun 26 07:17:50.026270 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 26 07:17:50.051219 ignition[764]: Ignition 2.19.0 Jun 26 07:17:50.051236 ignition[764]: Stage: kargs Jun 26 07:17:50.051538 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:50.051558 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:50.052636 ignition[764]: kargs: kargs passed Jun 26 07:17:50.052690 ignition[764]: Ignition finished successfully Jun 26 07:17:50.055523 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 26 07:17:50.062193 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 26 07:17:50.091473 ignition[772]: Ignition 2.19.0 Jun 26 07:17:50.092321 ignition[772]: Stage: disks Jun 26 07:17:50.093046 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:50.093565 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:50.094841 ignition[772]: disks: disks passed Jun 26 07:17:50.094901 ignition[772]: Ignition finished successfully Jun 26 07:17:50.095914 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 26 07:17:50.097283 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 26 07:17:50.100544 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 26 07:17:50.101558 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 26 07:17:50.102425 systemd[1]: Reached target sysinit.target - System Initialization. Jun 26 07:17:50.103229 systemd[1]: Reached target basic.target - Basic System. Jun 26 07:17:50.113059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 26 07:17:50.133551 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 26 07:17:50.137794 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 26 07:17:50.146968 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jun 26 07:17:50.272834 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 26 07:17:50.273922 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 26 07:17:50.275145 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 26 07:17:50.281008 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 26 07:17:50.293125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 26 07:17:50.298022 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jun 26 07:17:50.300992 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 26 07:17:50.305743 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789) Jun 26 07:17:50.305774 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:17:50.307018 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 26 07:17:50.311955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:17:50.311996 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:17:50.307156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 26 07:17:50.314820 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:17:50.321667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 26 07:17:50.329215 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 26 07:17:50.339051 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 26 07:17:50.414854 coreos-metadata[791]: Jun 26 07:17:50.414 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:17:50.423101 coreos-metadata[792]: Jun 26 07:17:50.423 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:17:50.424747 coreos-metadata[791]: Jun 26 07:17:50.424 INFO Fetch successful Jun 26 07:17:50.427342 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Jun 26 07:17:50.431967 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jun 26 07:17:50.432723 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jun 26 07:17:50.435722 coreos-metadata[792]: Jun 26 07:17:50.435 INFO Fetch successful Jun 26 07:17:50.438005 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Jun 26 07:17:50.443896 coreos-metadata[792]: Jun 26 07:17:50.443 INFO wrote hostname ci-4012.0.0-9-ba53898dab to /sysroot/etc/hostname Jun 26 07:17:50.446085 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 26 07:17:50.448031 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Jun 26 07:17:50.453939 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Jun 26 07:17:50.565725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 26 07:17:50.570965 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 26 07:17:50.574070 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 26 07:17:50.587828 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:17:50.609231 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 26 07:17:50.618624 ignition[910]: INFO : Ignition 2.19.0 Jun 26 07:17:50.618624 ignition[910]: INFO : Stage: mount Jun 26 07:17:50.620164 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:50.620164 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:50.620164 ignition[910]: INFO : mount: mount passed Jun 26 07:17:50.620164 ignition[910]: INFO : Ignition finished successfully Jun 26 07:17:50.620941 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 26 07:17:50.629077 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 26 07:17:50.655485 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 26 07:17:50.668170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 26 07:17:50.678817 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922) Jun 26 07:17:50.678883 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:17:50.678897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:17:50.680822 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:17:50.683836 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:17:50.686967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 26 07:17:50.720897 ignition[939]: INFO : Ignition 2.19.0 Jun 26 07:17:50.720897 ignition[939]: INFO : Stage: files Jun 26 07:17:50.722079 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:50.722079 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:50.723285 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Jun 26 07:17:50.723876 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 26 07:17:50.723876 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 26 07:17:50.726909 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 26 07:17:50.727476 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 26 07:17:50.728090 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 26 07:17:50.727488 unknown[939]: wrote ssh authorized keys file for user: core Jun 26 07:17:50.729552 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 26 07:17:50.729552 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 26 07:17:50.729552 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 26 07:17:50.729552 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 26 07:17:50.754113 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 26 07:17:50.811597 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 26 
07:17:50.811597 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 26 07:17:50.811597 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 26 07:17:51.284488 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jun 26 07:17:51.347643 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 26 07:17:51.348427 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jun 26 07:17:51.348427 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jun 26 07:17:51.348427 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 26 07:17:51.350220 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:17:51.358250 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:17:51.358250 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:17:51.358250 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jun 26 07:17:51.733090 systemd-networkd[754]: eth0: Gained IPv6LL
Jun 26 07:17:51.797346 systemd-networkd[754]: eth1: Gained IPv6LL
Jun 26 07:17:51.803868 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jun 26 07:17:52.144991 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 26 07:17:52.146039 ignition[939]: INFO : files: op(d): [started] processing unit "containerd.service"
Jun 26 07:17:52.147504 ignition[939]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 26 07:17:52.149044 ignition[939]: INFO : files: files passed
Jun 26 07:17:52.149044 ignition[939]: INFO : Ignition finished successfully
Jun 26 07:17:52.149672 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 26 07:17:52.157117 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 26 07:17:52.161055 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 26 07:17:52.165885 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 26 07:17:52.166014 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
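[Editor's note: the op(…) entries above map one-to-one to items in the Ignition config this droplet was provisioned with. The config itself never appears in the log; the Butane sketch below is a hypothetical reconstruction of its shape only — variant/version, the empty drop-in body, and the omitted file entries are assumptions, while the paths, URLs, and unit names are taken from the log entries above.]

```yaml
# Hypothetical Butane sketch of a config that would produce the ops above.
# Only the shape is illustrative; contents not shown in the log are omitted.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/bin/cilium.tar.gz          # op(5): fetched from GitHub
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    # ops (6)-(a): install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml,
    # update.conf — file entries omitted here for brevity
  links:
    - path: /etc/extensions/kubernetes.raw  # op(b): sysext activation symlink
      target: /opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw
      hard: false
systemd:
  units:
    - name: containerd.service              # op(d)/op(e): drop-in written
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # (drop-in body is not shown in the log)
    - name: prepare-helm.service            # op(f)/op(11): unit written, preset enabled
      enabled: true
```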
Jun 26 07:17:52.187427 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:17:52.187427 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:17:52.189873 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:17:52.191683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 26 07:17:52.192657 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 26 07:17:52.197082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 26 07:17:52.252522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 26 07:17:52.252676 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 26 07:17:52.254492 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 26 07:17:52.254992 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 26 07:17:52.255921 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 26 07:17:52.262125 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 26 07:17:52.279923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 26 07:17:52.287086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 26 07:17:52.301593 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 26 07:17:52.302221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:17:52.303332 systemd[1]: Stopped target timers.target - Timer Units. Jun 26 07:17:52.304217 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jun 26 07:17:52.304324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 26 07:17:52.305277 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 26 07:17:52.305927 systemd[1]: Stopped target basic.target - Basic System. Jun 26 07:17:52.306820 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 26 07:17:52.307640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 26 07:17:52.308470 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 26 07:17:52.309677 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 26 07:17:52.310549 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 26 07:17:52.311381 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 26 07:17:52.312139 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 26 07:17:52.312943 systemd[1]: Stopped target swap.target - Swaps. Jun 26 07:17:52.313896 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 26 07:17:52.314031 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 26 07:17:52.314939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 26 07:17:52.315332 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 26 07:17:52.316164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 26 07:17:52.316411 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 26 07:17:52.317035 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 26 07:17:52.317149 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 26 07:17:52.318388 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jun 26 07:17:52.318469 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 26 07:17:52.319216 systemd[1]: ignition-files.service: Deactivated successfully. Jun 26 07:17:52.319287 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 26 07:17:52.319986 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 26 07:17:52.320048 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 26 07:17:52.331134 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 26 07:17:52.334185 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 26 07:17:52.334287 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:17:52.338008 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 26 07:17:52.338435 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 26 07:17:52.338508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:17:52.339009 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 26 07:17:52.339071 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 26 07:17:52.340247 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 26 07:17:52.342884 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 26 07:17:52.368016 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jun 26 07:17:52.372879 ignition[992]: INFO : Ignition 2.19.0 Jun 26 07:17:52.372879 ignition[992]: INFO : Stage: umount Jun 26 07:17:52.372879 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:17:52.372879 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:17:52.391038 ignition[992]: INFO : umount: umount passed Jun 26 07:17:52.391038 ignition[992]: INFO : Ignition finished successfully Jun 26 07:17:52.384938 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 26 07:17:52.385058 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 26 07:17:52.386084 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 26 07:17:52.386187 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 26 07:17:52.386627 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 26 07:17:52.386688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 26 07:17:52.387611 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 26 07:17:52.387673 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 26 07:17:52.401835 systemd[1]: Stopped target network.target - Network. Jun 26 07:17:52.402576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 26 07:17:52.402677 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 26 07:17:52.403554 systemd[1]: Stopped target paths.target - Path Units. Jun 26 07:17:52.404275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 26 07:17:52.408147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 26 07:17:52.408751 systemd[1]: Stopped target slices.target - Slice Units. Jun 26 07:17:52.409559 systemd[1]: Stopped target sockets.target - Socket Units. Jun 26 07:17:52.410458 systemd[1]: iscsid.socket: Deactivated successfully. 
Jun 26 07:17:52.410526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 26 07:17:52.411459 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 26 07:17:52.411520 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 26 07:17:52.412229 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 26 07:17:52.412300 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 26 07:17:52.413101 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 26 07:17:52.413166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 26 07:17:52.414543 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 26 07:17:52.415419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 26 07:17:52.416444 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 26 07:17:52.416541 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 26 07:17:52.417701 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 26 07:17:52.417861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 26 07:17:52.418962 systemd-networkd[754]: eth0: DHCPv6 lease lost Jun 26 07:17:52.421920 systemd-networkd[754]: eth1: DHCPv6 lease lost Jun 26 07:17:52.423219 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 26 07:17:52.423340 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 26 07:17:52.426088 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 26 07:17:52.426202 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 26 07:17:52.430632 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 26 07:17:52.430690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:17:52.436040 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 26 07:17:52.436472 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 26 07:17:52.436559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 26 07:17:52.437203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 26 07:17:52.437266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:17:52.437871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 26 07:17:52.437919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 26 07:17:52.439471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 26 07:17:52.439529 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 26 07:17:52.440166 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 26 07:17:52.454026 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 26 07:17:52.454152 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 26 07:17:52.455367 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 26 07:17:52.455533 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:17:52.457309 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 26 07:17:52.457462 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 26 07:17:52.458254 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 26 07:17:52.458293 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:17:52.459056 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 26 07:17:52.459108 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 26 07:17:52.460036 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 26 07:17:52.460075 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 26 07:17:52.460648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 26 07:17:52.460687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:17:52.466144 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 26 07:17:52.468244 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 26 07:17:52.468318 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:17:52.469098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:17:52.469160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:17:52.485935 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 26 07:17:52.486692 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 26 07:17:52.487596 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 26 07:17:52.493078 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 26 07:17:52.504105 systemd[1]: Switching root. Jun 26 07:17:52.536828 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jun 26 07:17:52.536922 systemd-journald[184]: Journal stopped Jun 26 07:17:53.769884 kernel: SELinux: policy capability network_peer_controls=1 Jun 26 07:17:53.769969 kernel: SELinux: policy capability open_perms=1 Jun 26 07:17:53.769988 kernel: SELinux: policy capability extended_socket_class=1 Jun 26 07:17:53.770006 kernel: SELinux: policy capability always_check_network=0 Jun 26 07:17:53.770022 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 26 07:17:53.770038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 26 07:17:53.770051 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 26 07:17:53.770066 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 26 07:17:53.770082 kernel: audit: type=1403 audit(1719386272.724:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 26 07:17:53.770099 systemd[1]: Successfully loaded SELinux policy in 41.561ms. Jun 26 07:17:53.770127 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.853ms. Jun 26 07:17:53.770143 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 26 07:17:53.770162 systemd[1]: Detected virtualization kvm. Jun 26 07:17:53.770175 systemd[1]: Detected architecture x86-64. Jun 26 07:17:53.770188 systemd[1]: Detected first boot. Jun 26 07:17:53.770203 systemd[1]: Hostname set to . Jun 26 07:17:53.770220 systemd[1]: Initializing machine ID from VM UUID. Jun 26 07:17:53.770234 zram_generator::config[1052]: No configuration found. Jun 26 07:17:53.770251 systemd[1]: Populated /etc with preset unit settings. Jun 26 07:17:53.770264 systemd[1]: Queued start job for default target multi-user.target. 
Jun 26 07:17:53.770276 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 26 07:17:53.770295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 26 07:17:53.770310 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 26 07:17:53.770322 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 26 07:17:53.770338 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 26 07:17:53.770351 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 26 07:17:53.770364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 26 07:17:53.770376 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 26 07:17:53.770389 systemd[1]: Created slice user.slice - User and Session Slice. Jun 26 07:17:53.770401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 26 07:17:53.770413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 26 07:17:53.770426 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 26 07:17:53.770441 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 26 07:17:53.770454 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 26 07:17:53.770467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 26 07:17:53.770479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 26 07:17:53.770491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 26 07:17:53.770503 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jun 26 07:17:53.770515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:17:53.770528 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 26 07:17:53.770544 systemd[1]: Reached target slices.target - Slice Units. Jun 26 07:17:53.770560 systemd[1]: Reached target swap.target - Swaps. Jun 26 07:17:53.770576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 26 07:17:53.770593 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 26 07:17:53.770608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 26 07:17:53.770621 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 26 07:17:53.770633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:17:53.770645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 26 07:17:53.770661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:17:53.770675 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 26 07:17:53.770692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 26 07:17:53.770704 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 26 07:17:53.770717 systemd[1]: Mounting media.mount - External Media Directory... Jun 26 07:17:53.770744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:17:53.770758 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 26 07:17:53.770771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 26 07:17:53.770785 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 26 07:17:53.773857 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 26 07:17:53.773887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:17:53.773901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 26 07:17:53.773914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 26 07:17:53.773928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:17:53.773940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 26 07:17:53.773952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 26 07:17:53.773965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 26 07:17:53.773986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:17:53.774000 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 26 07:17:53.774013 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 26 07:17:53.774027 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jun 26 07:17:53.774041 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 26 07:17:53.774053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 26 07:17:53.774065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 26 07:17:53.774082 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 26 07:17:53.774098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jun 26 07:17:53.774111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:17:53.774123 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 26 07:17:53.774136 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 26 07:17:53.774148 systemd[1]: Mounted media.mount - External Media Directory. Jun 26 07:17:53.774161 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 26 07:17:53.774207 systemd-journald[1141]: Collecting audit messages is disabled. Jun 26 07:17:53.774239 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 26 07:17:53.774251 kernel: loop: module loaded Jun 26 07:17:53.774266 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 26 07:17:53.774279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:17:53.774291 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 26 07:17:53.774304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 26 07:17:53.774318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 26 07:17:53.774330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:17:53.774346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:17:53.774359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:17:53.774372 systemd-journald[1141]: Journal started Jun 26 07:17:53.774396 systemd-journald[1141]: Runtime Journal (/run/log/journal/2137bea3331c4de3ba7536284455e644) is 4.9M, max 39.3M, 34.4M free. Jun 26 07:17:53.779827 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:17:53.782827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 26 07:17:53.787602 systemd[1]: Started systemd-journald.service - Journal Service. Jun 26 07:17:53.789446 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 26 07:17:53.790381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 26 07:17:53.791284 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 26 07:17:53.797969 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 26 07:17:53.803851 kernel: fuse: init (API version 7.39) Jun 26 07:17:53.807559 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 26 07:17:53.810431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 26 07:17:53.824236 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 26 07:17:53.834122 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 26 07:17:53.860156 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 26 07:17:53.861936 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 26 07:17:53.870476 kernel: ACPI: bus type drm_connector registered Jun 26 07:17:53.873693 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 26 07:17:53.885215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 26 07:17:53.887091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:17:53.896106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 26 07:17:53.897957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 26 07:17:53.907034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:17:53.914029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 26 07:17:53.919857 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 26 07:17:53.920132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 26 07:17:53.927235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 26 07:17:53.927878 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 26 07:17:53.939774 systemd-journald[1141]: Time spent on flushing to /var/log/journal/2137bea3331c4de3ba7536284455e644 is 65.299ms for 978 entries. Jun 26 07:17:53.939774 systemd-journald[1141]: System Journal (/var/log/journal/2137bea3331c4de3ba7536284455e644) is 8.0M, max 195.6M, 187.6M free. Jun 26 07:17:54.029119 systemd-journald[1141]: Received client request to flush runtime journal. Jun 26 07:17:53.948541 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 26 07:17:53.953590 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 26 07:17:53.978097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:17:53.993207 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 26 07:17:54.019860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:17:54.025388 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 26 07:17:54.034451 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 26 07:17:54.046896 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jun 26 07:17:54.046928 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. 
Jun 26 07:17:54.058081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 26 07:17:54.066117 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 26 07:17:54.106616 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 26 07:17:54.115132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 26 07:17:54.137833 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jun 26 07:17:54.137855 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jun 26 07:17:54.143696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:17:54.878022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 26 07:17:54.884184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 26 07:17:54.922481 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Jun 26 07:17:54.955343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:17:54.964023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 26 07:17:54.994201 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 26 07:17:55.075124 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 26 07:17:55.105783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:17:55.106040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:17:55.112084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:17:55.122254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 26 07:17:55.126589 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1226) Jun 26 07:17:55.135187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:17:55.135778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 26 07:17:55.135879 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 26 07:17:55.135929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:17:55.148973 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 26 07:17:55.163389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 26 07:17:55.163611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:17:55.190657 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1237) Jun 26 07:17:55.185913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:17:55.186263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:17:55.188221 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:17:55.190191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 26 07:17:55.205977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:17:55.206052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 26 07:17:55.251848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 26 07:17:55.311663 systemd-networkd[1228]: lo: Link UP Jun 26 07:17:55.311676 systemd-networkd[1228]: lo: Gained carrier Jun 26 07:17:55.316009 systemd-networkd[1228]: Enumeration completed Jun 26 07:17:55.316263 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 26 07:17:55.316505 systemd-networkd[1228]: eth0: Configuring with /run/systemd/network/10-9e:f2:ea:0c:e2:93.network. Jun 26 07:17:55.318106 systemd-networkd[1228]: eth1: Configuring with /run/systemd/network/10-0e:f2:24:59:55:8d.network. Jun 26 07:17:55.319055 systemd-networkd[1228]: eth0: Link UP Jun 26 07:17:55.319069 systemd-networkd[1228]: eth0: Gained carrier Jun 26 07:17:55.325070 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 26 07:17:55.325326 systemd-networkd[1228]: eth1: Link UP Jun 26 07:17:55.325333 systemd-networkd[1228]: eth1: Gained carrier Jun 26 07:17:55.361003 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 26 07:17:55.366819 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 26 07:17:55.371829 kernel: ACPI: button: Power Button [PWRF] Jun 26 07:17:55.431839 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 26 07:17:55.463833 kernel: mousedev: PS/2 mouse device common for all mice Jun 26 07:17:55.474159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
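[Editor's note: the "Configuring with /run/systemd/network/10-9e:f2:ea:0c:e2:93.network" entries above refer to per-interface systemd-networkd units that DigitalOcean's metadata agent generates at runtime, named after each NIC's MAC address. Their contents are not in the log; the sketch below is only an illustration of what a MAC-matched .network unit of that kind typically looks like — every key/value here is an assumption, not the droplet's actual config.]

```ini
# Hypothetical /run/systemd/network/10-9e:f2:ea:0c:e2:93.network sketch:
# match the interface by MAC, then configure addressing on it.
[Match]
MACAddress=9e:f2:ea:0c:e2:93

[Network]
DHCP=yes
IPv6AcceptRA=true
```

The later "DHCPv6 lease lost" and "Gained IPv6LL" messages for eth0/eth1 are consistent with units of this general form, but the real files may instead carry static [Address]/[Route] sections from the droplet metadata.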
Jun 26 07:17:55.614841 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jun 26 07:17:55.614947 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jun 26 07:17:55.624849 kernel: Console: switching to colour dummy device 80x25
Jun 26 07:17:55.624939 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jun 26 07:17:55.624955 kernel: [drm] features: -context_init
Jun 26 07:17:55.624970 kernel: [drm] number of scanouts: 1
Jun 26 07:17:55.625019 kernel: [drm] number of cap sets: 0
Jun 26 07:17:55.627628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:17:55.631827 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jun 26 07:17:55.641953 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jun 26 07:17:55.644475 kernel: Console: switching to colour frame buffer device 128x48
Jun 26 07:17:55.656834 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jun 26 07:17:55.658350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:17:55.660270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:17:55.662389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:17:55.678024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:17:55.684215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 26 07:17:55.684607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:17:55.696965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 26 07:17:55.735902 kernel: EDAC MC: Ver: 3.0.0
Jun 26 07:17:55.760005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 26 07:17:55.771782 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 26 07:17:55.782178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 26 07:17:55.799248 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:17:55.836407 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 26 07:17:55.838142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 26 07:17:55.846162 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 26 07:17:55.857313 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 26 07:17:55.892547 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 26 07:17:55.895571 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 26 07:17:55.907044 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jun 26 07:17:55.907698 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 26 07:17:55.907763 systemd[1]: Reached target machines.target - Containers.
Jun 26 07:17:55.910352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 26 07:17:55.929833 kernel: ISO 9660 Extensions: RRIP_1991A
Jun 26 07:17:55.932576 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jun 26 07:17:55.934904 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 26 07:17:55.938931 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 26 07:17:55.948200 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 26 07:17:55.953691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 26 07:17:55.955440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:17:55.967062 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 26 07:17:55.972521 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 26 07:17:55.977987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 26 07:17:55.982297 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 26 07:17:55.998266 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 26 07:17:56.001915 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 26 07:17:56.017742 kernel: loop0: detected capacity change from 0 to 8
Jun 26 07:17:56.017925 kernel: block loop0: the capability attribute has been deprecated.
Jun 26 07:17:56.036171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 26 07:17:56.058969 kernel: loop1: detected capacity change from 0 to 139760
Jun 26 07:17:56.114349 kernel: loop2: detected capacity change from 0 to 209816
Jun 26 07:17:56.147884 kernel: loop3: detected capacity change from 0 to 80568
Jun 26 07:17:56.183606 kernel: loop4: detected capacity change from 0 to 8
Jun 26 07:17:56.188075 kernel: loop5: detected capacity change from 0 to 139760
Jun 26 07:17:56.215837 kernel: loop6: detected capacity change from 0 to 209816
Jun 26 07:17:56.242386 kernel: loop7: detected capacity change from 0 to 80568
Jun 26 07:17:56.254284 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jun 26 07:17:56.254990 (sd-merge)[1320]: Merged extensions into '/usr'.
Jun 26 07:17:56.262697 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 26 07:17:56.262722 systemd[1]: Reloading...
Jun 26 07:17:56.396962 zram_generator::config[1346]: No configuration found.
Jun 26 07:17:56.606272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:17:56.610839 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 26 07:17:56.677086 systemd[1]: Reloading finished in 413 ms.
Jun 26 07:17:56.694506 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 26 07:17:56.696428 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 26 07:17:56.708034 systemd[1]: Starting ensure-sysext.service...
Jun 26 07:17:56.710791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 26 07:17:56.725625 systemd[1]: Reloading requested from client PID 1396 ('systemctl') (unit ensure-sysext.service)...
Jun 26 07:17:56.725651 systemd[1]: Reloading...
Jun 26 07:17:56.765342 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 26 07:17:56.765767 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 26 07:17:56.767479 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 26 07:17:56.769453 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Jun 26 07:17:56.769736 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Jun 26 07:17:56.774514 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:17:56.774694 systemd-tmpfiles[1397]: Skipping /boot
Jun 26 07:17:56.790861 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot.
Jun 26 07:17:56.791002 systemd-tmpfiles[1397]: Skipping /boot
Jun 26 07:17:56.835614 zram_generator::config[1424]: No configuration found.
Jun 26 07:17:57.045024 systemd-networkd[1228]: eth1: Gained IPv6LL
Jun 26 07:17:57.050127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:17:57.133940 systemd[1]: Reloading finished in 407 ms.
Jun 26 07:17:57.153118 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 26 07:17:57.156772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 26 07:17:57.173140 systemd-networkd[1228]: eth0: Gained IPv6LL
Jun 26 07:17:57.177227 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 26 07:17:57.192306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 26 07:17:57.200026 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 26 07:17:57.216527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 26 07:17:57.232533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 26 07:17:57.251235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.252141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:17:57.260183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 26 07:17:57.284315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 26 07:17:57.293607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 26 07:17:57.294265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:17:57.294582 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.303623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.304207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:17:57.304989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:17:57.305109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.312576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.315685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 26 07:17:57.332871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 26 07:17:57.347624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 26 07:17:57.351009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 26 07:17:57.352570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 26 07:17:57.357007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 26 07:17:57.364113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 26 07:17:57.364489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 26 07:17:57.371040 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 26 07:17:57.379834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 26 07:17:57.380199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 26 07:17:57.386844 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 26 07:17:57.387216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 26 07:17:57.399696 systemd[1]: Finished ensure-sysext.service.
Jun 26 07:17:57.407421 augenrules[1505]: No rules
Jun 26 07:17:57.413761 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 26 07:17:57.423526 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 26 07:17:57.441662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 26 07:17:57.442177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 26 07:17:57.458257 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 26 07:17:57.477120 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 26 07:17:57.479142 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 26 07:17:57.488729 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 26 07:17:57.532822 systemd-resolved[1481]: Positive Trust Anchors:
Jun 26 07:17:57.532843 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 26 07:17:57.532902 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 26 07:17:57.537583 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 26 07:17:57.542066 systemd-resolved[1481]: Using system hostname 'ci-4012.0.0-9-ba53898dab'.
Jun 26 07:17:57.550396 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 26 07:17:57.552715 systemd[1]: Reached target network.target - Network.
Jun 26 07:17:57.553662 systemd[1]: Reached target network-online.target - Network is Online.
Jun 26 07:17:57.558650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 26 07:17:57.614166 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 26 07:17:57.615754 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 26 07:17:57.620880 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 26 07:17:57.621714 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 26 07:17:57.622572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 26 07:17:57.623277 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
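The negative trust anchors reported by systemd-resolved above are the locally-served zones of RFC 6303 (private-range reverse-DNS zones plus a few special-use names), for which queries are answered locally instead of being sent upstream. As an illustrative sketch (not part of the boot sequence; only a subset of the anchors is included), the reverse-DNS name of a private address always falls under one of these zones:

```python
import ipaddress

# A subset of the negative trust anchors listed by systemd-resolved above.
NEGATIVE_ANCHORS = {"10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa"}

def locally_served(ip: str) -> bool:
    """True if the PTR name for `ip` falls under one of the anchor zones."""
    ptr = ipaddress.ip_address(ip).reverse_pointer  # e.g. '3.2.1.10.in-addr.arpa'
    return any(ptr == zone or ptr.endswith("." + zone) for zone in NEGATIVE_ANCHORS)

print(locally_served("10.1.2.3"))     # True: private, answered locally
print(locally_served("192.168.0.5"))  # True
print(locally_served("8.8.8.8"))      # False: public, query goes upstream
```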
Jun 26 07:17:57.623341 systemd[1]: Reached target paths.target - Path Units.
Jun 26 07:17:57.625492 systemd[1]: Reached target time-set.target - System Time Set.
Jun 26 07:17:57.626601 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 26 07:17:57.627628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 26 07:17:57.628627 systemd[1]: Reached target timers.target - Timer Units.
Jun 26 07:17:57.635056 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 26 07:17:57.642221 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 26 07:17:57.647577 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 26 07:17:57.655118 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 26 07:17:57.657317 systemd[1]: Reached target sockets.target - Socket Units.
Jun 26 07:17:57.658666 systemd[1]: Reached target basic.target - Basic System.
Jun 26 07:17:57.660075 systemd[1]: System is tainted: cgroupsv1
Jun 26 07:17:57.660180 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:17:57.660220 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 26 07:17:58.084423 systemd-timesyncd[1524]: Contacted time server 216.229.4.66:123 (0.flatcar.pool.ntp.org).
Jun 26 07:17:58.084514 systemd-timesyncd[1524]: Initial clock synchronization to Wed 2024-06-26 07:17:58.084203 UTC.
Jun 26 07:17:58.084595 systemd-resolved[1481]: Clock change detected. Flushing caches.
Jun 26 07:17:58.097194 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 26 07:17:58.104297 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 26 07:17:58.110399 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 26 07:17:58.136211 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 26 07:17:58.148697 jq[1536]: false
Jun 26 07:17:58.156204 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 26 07:17:58.159767 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 26 07:17:58.176176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:17:58.193255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 26 07:17:58.205862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 26 07:17:58.222159 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 26 07:17:58.230667 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 26 07:17:58.241064 extend-filesystems[1539]: Found loop4
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found loop5
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found loop6
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found loop7
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda1
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda2
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda3
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found usr
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda4
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda6
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda7
Jun 26 07:17:58.250762 extend-filesystems[1539]: Found vda9
Jun 26 07:17:58.250762 extend-filesystems[1539]: Checking size of /dev/vda9
Jun 26 07:17:58.247222 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 26 07:17:58.264584 dbus-daemon[1534]: [system] SELinux support is enabled
Jun 26 07:17:58.335348 coreos-metadata[1533]: Jun 26 07:17:58.258 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:17:58.335348 coreos-metadata[1533]: Jun 26 07:17:58.289 INFO Fetch successful
Jun 26 07:17:58.277894 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 26 07:17:58.293060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 26 07:17:58.311311 systemd[1]: Starting update-engine.service - Update Engine...
Jun 26 07:17:58.339133 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 26 07:17:58.347405 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 26 07:17:58.371402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 26 07:17:58.371768 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 26 07:17:58.379572 extend-filesystems[1539]: Resized partition /dev/vda9
Jun 26 07:17:58.401530 extend-filesystems[1579]: resize2fs 1.47.0 (5-Feb-2023)
Jun 26 07:17:58.422249 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jun 26 07:17:58.399753 systemd[1]: motdgen.service: Deactivated successfully.
Jun 26 07:17:58.400175 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 26 07:17:58.412673 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 26 07:17:58.438868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 26 07:17:58.439193 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 26 07:17:58.467679 jq[1565]: true
Jun 26 07:17:58.480427 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 26 07:17:58.527748 update_engine[1562]: I0626 07:17:58.526679  1562 main.cc:92] Flatcar Update Engine starting
Jun 26 07:17:58.527473 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 26 07:17:58.562648 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 26 07:17:58.562872 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 26 07:17:58.562966 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 26 07:17:58.586663 jq[1593]: true
Jun 26 07:17:58.573435 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 26 07:17:58.573718 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jun 26 07:17:58.573766 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 26 07:17:58.594210 systemd-logind[1553]: New seat seat0.
Jun 26 07:17:58.595800 systemd-logind[1553]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 26 07:17:58.595826 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 26 07:17:58.597251 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 26 07:17:58.604697 tar[1583]: linux-amd64/helm
Jun 26 07:17:58.607562 systemd[1]: Started update-engine.service - Update Engine.
Jun 26 07:17:58.615007 update_engine[1562]: I0626 07:17:58.612061  1562 update_check_scheduler.cc:74] Next update check in 9m9s
Jun 26 07:17:58.617213 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 26 07:17:58.631515 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 26 07:17:58.786265 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jun 26 07:17:58.798069 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 26 07:17:58.798069 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 8
Jun 26 07:17:58.798069 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jun 26 07:17:58.824318 extend-filesystems[1539]: Resized filesystem in /dev/vda9
Jun 26 07:17:58.824318 extend-filesystems[1539]: Found vdb
Jun 26 07:17:58.824689 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 26 07:17:58.825137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 26 07:17:58.908264 bash[1633]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:17:58.909964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 26 07:17:58.963507 systemd[1]: Starting sshkeys.service...
Jun 26 07:17:59.004580 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1606)
Jun 26 07:17:59.026822 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 26 07:17:59.050280 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 26 07:17:59.065487 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 26 07:17:59.165020 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 26 07:17:59.266522 coreos-metadata[1646]: Jun 26 07:17:59.264 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jun 26 07:17:59.299786 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 26 07:17:59.309822 coreos-metadata[1646]: Jun 26 07:17:59.309 INFO Fetch successful
Jun 26 07:17:59.319702 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 26 07:17:59.352431 unknown[1646]: wrote ssh authorized keys file for user: core
Jun 26 07:17:59.389604 systemd[1]: issuegen.service: Deactivated successfully.
Jun 26 07:17:59.392440 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 26 07:17:59.412782 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 26 07:17:59.424026 update-ssh-keys[1666]: Updated "/home/core/.ssh/authorized_keys"
Jun 26 07:17:59.428375 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 26 07:17:59.459706 systemd[1]: Finished sshkeys.service.
Jun 26 07:17:59.512564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 26 07:17:59.526882 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 26 07:17:59.561448 containerd[1585]: time="2024-06-26T07:17:59.561259047Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 26 07:17:59.564492 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 26 07:17:59.565641 systemd[1]: Reached target getty.target - Login Prompts.
Jun 26 07:17:59.625090 containerd[1585]: time="2024-06-26T07:17:59.624911039Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 26 07:17:59.625927 containerd[1585]: time="2024-06-26T07:17:59.625582045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.628260 containerd[1585]: time="2024-06-26T07:17:59.628192643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.628850794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.629369387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.629405980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.629568105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.630876779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.630910898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.631072267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.631440261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.631474436Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.631492155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632013 containerd[1585]: time="2024-06-26T07:17:59.631814704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 26 07:17:59.632596 containerd[1585]: time="2024-06-26T07:17:59.631841058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 26 07:17:59.632596 containerd[1585]: time="2024-06-26T07:17:59.631944216Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 26 07:17:59.632596 containerd[1585]: time="2024-06-26T07:17:59.631964704Z" level=info msg="metadata content store policy set" policy=shared
Jun 26 07:17:59.643769 containerd[1585]: time="2024-06-26T07:17:59.643666847Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 26 07:17:59.644140 containerd[1585]: time="2024-06-26T07:17:59.644105690Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 26 07:17:59.644271 containerd[1585]: time="2024-06-26T07:17:59.644246848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 26 07:17:59.644453 containerd[1585]: time="2024-06-26T07:17:59.644431285Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 26 07:17:59.644878 containerd[1585]: time="2024-06-26T07:17:59.644827611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 26 07:17:59.646650 containerd[1585]: time="2024-06-26T07:17:59.646467300Z" level=info msg="NRI interface is disabled by configuration."
Jun 26 07:17:59.646902 containerd[1585]: time="2024-06-26T07:17:59.646867789Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 26 07:17:59.647472 containerd[1585]: time="2024-06-26T07:17:59.647431417Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 26 07:17:59.650143 containerd[1585]: time="2024-06-26T07:17:59.650060651Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 26 07:17:59.650723 containerd[1585]: time="2024-06-26T07:17:59.650656057Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 26 07:17:59.650925 containerd[1585]: time="2024-06-26T07:17:59.650880837Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 26 07:17:59.651093 containerd[1585]: time="2024-06-26T07:17:59.651058633Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.651219 containerd[1585]: time="2024-06-26T07:17:59.651197072Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.651328 containerd[1585]: time="2024-06-26T07:17:59.651307913Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653066304Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653155867Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653193491Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653225151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653253385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.653673458Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.654671725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.654747826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.654791029Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 26 07:17:59.655019 containerd[1585]: time="2024-06-26T07:17:59.654849896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 26 07:17:59.656702 containerd[1585]: time="2024-06-26T07:17:59.654969348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.657016 containerd[1585]: time="2024-06-26T07:17:59.656851381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.657171 containerd[1585]: time="2024-06-26T07:17:59.657148242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.657296 containerd[1585]: time="2024-06-26T07:17:59.657275176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.658848 containerd[1585]: time="2024-06-26T07:17:59.658801164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.659021 containerd[1585]: time="2024-06-26T07:17:59.658998947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.659155 containerd[1585]: time="2024-06-26T07:17:59.659128954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.659282 containerd[1585]: time="2024-06-26T07:17:59.659261276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.659365 containerd[1585]: time="2024-06-26T07:17:59.659349064Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 26 07:17:59.659754 containerd[1585]: time="2024-06-26T07:17:59.659719382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 26 07:17:59.659894 containerd[1585]: time="2024-06-26T07:17:59.659873254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Jun 26 07:17:59.660066 containerd[1585]: time="2024-06-26T07:17:59.660041538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 26 07:17:59.660156 containerd[1585]: time="2024-06-26T07:17:59.660139830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 26 07:17:59.660354 containerd[1585]: time="2024-06-26T07:17:59.660330557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 26 07:17:59.663014 containerd[1585]: time="2024-06-26T07:17:59.662553555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 26 07:17:59.663014 containerd[1585]: time="2024-06-26T07:17:59.662663196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 26 07:17:59.663014 containerd[1585]: time="2024-06-26T07:17:59.662707865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 26 07:17:59.665090 containerd[1585]: time="2024-06-26T07:17:59.663839484Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 26 07:17:59.665090 containerd[1585]: time="2024-06-26T07:17:59.664014529Z" level=info msg="Connect containerd service" Jun 26 07:17:59.665090 containerd[1585]: time="2024-06-26T07:17:59.664109154Z" level=info msg="using legacy CRI server" Jun 26 07:17:59.665090 containerd[1585]: time="2024-06-26T07:17:59.664127772Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 26 07:17:59.665090 containerd[1585]: time="2024-06-26T07:17:59.664337292Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.669931043Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.670123652Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.670296714Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.670362407Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.670393693Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.670187375Z" level=info msg="Start subscribing containerd event" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.671116442Z" level=info msg="Start recovering state" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.671322213Z" level=info msg="Start event monitor" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.671386054Z" level=info msg="Start snapshots syncer" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.671409859Z" level=info msg="Start cni network conf syncer for default" Jun 26 07:17:59.672174 containerd[1585]: time="2024-06-26T07:17:59.671427094Z" level=info msg="Start streaming server" Jun 26 07:17:59.679030 containerd[1585]: time="2024-06-26T07:17:59.676628081Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 26 07:17:59.679030 containerd[1585]: time="2024-06-26T07:17:59.676763192Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 26 07:17:59.682827 containerd[1585]: time="2024-06-26T07:17:59.682054659Z" level=info msg="containerd successfully booted in 0.158364s" Jun 26 07:17:59.682408 systemd[1]: Started containerd.service - containerd container runtime. Jun 26 07:18:00.438011 tar[1583]: linux-amd64/LICENSE Jun 26 07:18:00.438011 tar[1583]: linux-amd64/README.md Jun 26 07:18:00.484374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jun 26 07:18:01.215572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:18:01.219757 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 26 07:18:01.221748 systemd[1]: Startup finished in 7.294s (kernel) + 8.115s (userspace) = 15.409s. Jun 26 07:18:01.241888 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:18:02.764615 kubelet[1699]: E0626 07:18:02.763971 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:18:02.768638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:18:02.770467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:18:06.665600 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 26 07:18:06.673483 systemd[1]: Started sshd@0-64.23.160.249:22-147.75.109.163:59350.service - OpenSSH per-connection server daemon (147.75.109.163:59350). Jun 26 07:18:06.761769 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 59350 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:06.764881 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:06.776403 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 26 07:18:06.787452 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 26 07:18:06.791028 systemd-logind[1553]: New session 1 of user core. Jun 26 07:18:06.808330 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jun 26 07:18:06.819483 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 26 07:18:06.825843 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.000280 systemd[1718]: Queued start job for default target default.target. Jun 26 07:18:07.001887 systemd[1718]: Created slice app.slice - User Application Slice. Jun 26 07:18:07.001950 systemd[1718]: Reached target paths.target - Paths. Jun 26 07:18:07.002008 systemd[1718]: Reached target timers.target - Timers. Jun 26 07:18:07.009168 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 26 07:18:07.032067 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 26 07:18:07.032141 systemd[1718]: Reached target sockets.target - Sockets. Jun 26 07:18:07.032157 systemd[1718]: Reached target basic.target - Basic System. Jun 26 07:18:07.032234 systemd[1718]: Reached target default.target - Main User Target. Jun 26 07:18:07.032279 systemd[1718]: Startup finished in 197ms. Jun 26 07:18:07.033114 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 26 07:18:07.044565 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 26 07:18:07.114530 systemd[1]: Started sshd@1-64.23.160.249:22-147.75.109.163:59364.service - OpenSSH per-connection server daemon (147.75.109.163:59364). Jun 26 07:18:07.158466 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 59364 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:07.160413 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.166947 systemd-logind[1553]: New session 2 of user core. Jun 26 07:18:07.176475 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 26 07:18:07.245653 sshd[1730]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.249769 systemd[1]: sshd@1-64.23.160.249:22-147.75.109.163:59364.service: Deactivated successfully. Jun 26 07:18:07.254241 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Jun 26 07:18:07.260845 systemd[1]: Started sshd@2-64.23.160.249:22-147.75.109.163:59370.service - OpenSSH per-connection server daemon (147.75.109.163:59370). Jun 26 07:18:07.261686 systemd[1]: session-2.scope: Deactivated successfully. Jun 26 07:18:07.264398 systemd-logind[1553]: Removed session 2. Jun 26 07:18:07.308396 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 59370 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:07.310460 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.316536 systemd-logind[1553]: New session 3 of user core. Jun 26 07:18:07.324567 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 26 07:18:07.386281 sshd[1738]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.404638 systemd[1]: Started sshd@3-64.23.160.249:22-147.75.109.163:59378.service - OpenSSH per-connection server daemon (147.75.109.163:59378). Jun 26 07:18:07.407983 systemd[1]: sshd@2-64.23.160.249:22-147.75.109.163:59370.service: Deactivated successfully. Jun 26 07:18:07.412804 systemd[1]: session-3.scope: Deactivated successfully. Jun 26 07:18:07.414751 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Jun 26 07:18:07.417162 systemd-logind[1553]: Removed session 3. Jun 26 07:18:07.442748 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 59378 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:07.444682 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.450899 systemd-logind[1553]: New session 4 of user core. 
Jun 26 07:18:07.461623 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 26 07:18:07.527196 sshd[1743]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.536477 systemd[1]: Started sshd@4-64.23.160.249:22-147.75.109.163:59388.service - OpenSSH per-connection server daemon (147.75.109.163:59388). Jun 26 07:18:07.537247 systemd[1]: sshd@3-64.23.160.249:22-147.75.109.163:59378.service: Deactivated successfully. Jun 26 07:18:07.543462 systemd[1]: session-4.scope: Deactivated successfully. Jun 26 07:18:07.545562 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Jun 26 07:18:07.549350 systemd-logind[1553]: Removed session 4. Jun 26 07:18:07.584513 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 59388 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:07.586405 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.592878 systemd-logind[1553]: New session 5 of user core. Jun 26 07:18:07.599513 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 26 07:18:07.676795 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 26 07:18:07.677592 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:18:07.695949 sudo[1758]: pam_unix(sudo:session): session closed for user root Jun 26 07:18:07.701370 sshd[1751]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.714664 systemd[1]: Started sshd@5-64.23.160.249:22-147.75.109.163:59404.service - OpenSSH per-connection server daemon (147.75.109.163:59404). Jun 26 07:18:07.715924 systemd[1]: sshd@4-64.23.160.249:22-147.75.109.163:59388.service: Deactivated successfully. Jun 26 07:18:07.720774 systemd[1]: session-5.scope: Deactivated successfully. Jun 26 07:18:07.723129 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. 
Jun 26 07:18:07.726387 systemd-logind[1553]: Removed session 5. Jun 26 07:18:07.763768 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 59404 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:07.766503 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:07.773876 systemd-logind[1553]: New session 6 of user core. Jun 26 07:18:07.783420 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 26 07:18:07.846779 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 26 07:18:07.847683 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:18:07.853508 sudo[1768]: pam_unix(sudo:session): session closed for user root Jun 26 07:18:07.861820 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 26 07:18:07.862182 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:18:07.878823 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 26 07:18:07.883842 auditctl[1771]: No rules Jun 26 07:18:07.884534 systemd[1]: audit-rules.service: Deactivated successfully. Jun 26 07:18:07.884781 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 26 07:18:07.889752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 26 07:18:07.933018 augenrules[1790]: No rules Jun 26 07:18:07.935140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 26 07:18:07.937689 sudo[1767]: pam_unix(sudo:session): session closed for user root Jun 26 07:18:07.942065 sshd[1761]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:07.951467 systemd[1]: Started sshd@6-64.23.160.249:22-147.75.109.163:59418.service - OpenSSH per-connection server daemon (147.75.109.163:59418). 
Jun 26 07:18:07.952233 systemd[1]: sshd@5-64.23.160.249:22-147.75.109.163:59404.service: Deactivated successfully. Jun 26 07:18:07.954408 systemd[1]: session-6.scope: Deactivated successfully. Jun 26 07:18:07.957186 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Jun 26 07:18:07.959395 systemd-logind[1553]: Removed session 6. Jun 26 07:18:07.999611 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 59418 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:08.001768 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:08.008679 systemd-logind[1553]: New session 7 of user core. Jun 26 07:18:08.014646 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 26 07:18:08.078073 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 26 07:18:08.078368 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:18:08.288467 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 26 07:18:08.288692 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 26 07:18:08.749048 dockerd[1812]: time="2024-06-26T07:18:08.748940932Z" level=info msg="Starting up" Jun 26 07:18:08.845697 systemd[1]: var-lib-docker-metacopy\x2dcheck3005338734-merged.mount: Deactivated successfully. Jun 26 07:18:08.869211 dockerd[1812]: time="2024-06-26T07:18:08.869157313Z" level=info msg="Loading containers: start." Jun 26 07:18:09.000232 kernel: Initializing XFRM netlink socket Jun 26 07:18:09.130174 systemd-networkd[1228]: docker0: Link UP Jun 26 07:18:09.143107 dockerd[1812]: time="2024-06-26T07:18:09.143010478Z" level=info msg="Loading containers: done." 
Jun 26 07:18:09.259473 dockerd[1812]: time="2024-06-26T07:18:09.259416769Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 26 07:18:09.259748 dockerd[1812]: time="2024-06-26T07:18:09.259705522Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 26 07:18:09.259874 dockerd[1812]: time="2024-06-26T07:18:09.259856783Z" level=info msg="Daemon has completed initialization" Jun 26 07:18:09.315714 dockerd[1812]: time="2024-06-26T07:18:09.315234557Z" level=info msg="API listen on /run/docker.sock" Jun 26 07:18:09.316177 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 26 07:18:10.309778 containerd[1585]: time="2024-06-26T07:18:10.309292180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 26 07:18:10.945703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335871451.mount: Deactivated successfully. 
Jun 26 07:18:12.637189 containerd[1585]: time="2024-06-26T07:18:12.637048933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:12.639325 containerd[1585]: time="2024-06-26T07:18:12.639258557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 26 07:18:12.639487 containerd[1585]: time="2024-06-26T07:18:12.639406333Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:12.646349 containerd[1585]: time="2024-06-26T07:18:12.646216517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:12.648761 containerd[1585]: time="2024-06-26T07:18:12.648369470Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.339021625s" Jun 26 07:18:12.648761 containerd[1585]: time="2024-06-26T07:18:12.648458631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 26 07:18:12.681210 containerd[1585]: time="2024-06-26T07:18:12.680845348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 26 07:18:13.019338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 26 07:18:13.027368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:18:13.183214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:18:13.193556 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:18:13.285036 kubelet[2019]: E0626 07:18:13.283015 2019 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:18:13.289699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:18:13.290018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:18:14.567299 containerd[1585]: time="2024-06-26T07:18:14.567223035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:14.568215 containerd[1585]: time="2024-06-26T07:18:14.568112525Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 26 07:18:14.570008 containerd[1585]: time="2024-06-26T07:18:14.568928615Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:14.573004 containerd[1585]: time="2024-06-26T07:18:14.572093848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:14.573642 containerd[1585]: 
time="2024-06-26T07:18:14.573611119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.892711678s" Jun 26 07:18:14.573708 containerd[1585]: time="2024-06-26T07:18:14.573646757Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 26 07:18:14.603588 containerd[1585]: time="2024-06-26T07:18:14.603533509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 26 07:18:15.838211 containerd[1585]: time="2024-06-26T07:18:15.838136585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:15.839729 containerd[1585]: time="2024-06-26T07:18:15.839661073Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 26 07:18:15.839998 containerd[1585]: time="2024-06-26T07:18:15.839950026Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:15.845457 containerd[1585]: time="2024-06-26T07:18:15.845372044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:15.851174 containerd[1585]: time="2024-06-26T07:18:15.851103726Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id 
\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.247517379s"
Jun 26 07:18:15.851174 containerd[1585]: time="2024-06-26T07:18:15.851174034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jun 26 07:18:15.888589 containerd[1585]: time="2024-06-26T07:18:15.888541776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jun 26 07:18:17.111269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47281051.mount: Deactivated successfully.
Jun 26 07:18:17.610192 containerd[1585]: time="2024-06-26T07:18:17.610020075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:17.611586 containerd[1585]: time="2024-06-26T07:18:17.611343327Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jun 26 07:18:17.612239 containerd[1585]: time="2024-06-26T07:18:17.612187539Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:17.615327 containerd[1585]: time="2024-06-26T07:18:17.615274020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:17.616536 containerd[1585]: time="2024-06-26T07:18:17.616343286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.727542299s"
Jun 26 07:18:17.616536 containerd[1585]: time="2024-06-26T07:18:17.616398745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jun 26 07:18:17.643963 containerd[1585]: time="2024-06-26T07:18:17.643887734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 26 07:18:18.265352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340635997.mount: Deactivated successfully.
Jun 26 07:18:18.277029 containerd[1585]: time="2024-06-26T07:18:18.276165334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:18.279111 containerd[1585]: time="2024-06-26T07:18:18.279021788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 26 07:18:18.280245 containerd[1585]: time="2024-06-26T07:18:18.280148784Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:18.283590 containerd[1585]: time="2024-06-26T07:18:18.283453878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:18.285968 containerd[1585]: time="2024-06-26T07:18:18.284436015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 640.458197ms"
Jun 26 07:18:18.285968 containerd[1585]: time="2024-06-26T07:18:18.284477385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 26 07:18:18.333832 containerd[1585]: time="2024-06-26T07:18:18.333710731Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jun 26 07:18:18.880714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819609460.mount: Deactivated successfully.
Jun 26 07:18:20.626150 containerd[1585]: time="2024-06-26T07:18:20.626084446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:20.627778 containerd[1585]: time="2024-06-26T07:18:20.627571831Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jun 26 07:18:20.628514 containerd[1585]: time="2024-06-26T07:18:20.628475139Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:20.632825 containerd[1585]: time="2024-06-26T07:18:20.632767773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:20.634736 containerd[1585]: time="2024-06-26T07:18:20.634326017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.300515305s"
Jun 26 07:18:20.634736 containerd[1585]: time="2024-06-26T07:18:20.634381004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jun 26 07:18:20.669067 containerd[1585]: time="2024-06-26T07:18:20.669012934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jun 26 07:18:21.328366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468167298.mount: Deactivated successfully.
Jun 26 07:18:22.047946 containerd[1585]: time="2024-06-26T07:18:22.047829520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:22.050045 containerd[1585]: time="2024-06-26T07:18:22.049443616Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jun 26 07:18:22.051162 containerd[1585]: time="2024-06-26T07:18:22.051106163Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:22.055118 containerd[1585]: time="2024-06-26T07:18:22.055025406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:22.056365 containerd[1585]: time="2024-06-26T07:18:22.056136803Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.387056667s"
Jun 26 07:18:22.056365 containerd[1585]: time="2024-06-26T07:18:22.056199769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jun 26 07:18:23.540349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 26 07:18:23.550835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:18:23.754335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:23.766970 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 26 07:18:23.862508 kubelet[2194]: E0626 07:18:23.862121 2194 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 26 07:18:23.868307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 26 07:18:23.868604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 26 07:18:25.763918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:25.775529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:18:25.837330 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-7.scope)...
Jun 26 07:18:25.837352 systemd[1]: Reloading...
Jun 26 07:18:25.991014 zram_generator::config[2247]: No configuration found.
Jun 26 07:18:26.205450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 26 07:18:26.332232 systemd[1]: Reloading finished in 494 ms.
Jun 26 07:18:26.420626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:26.423927 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 26 07:18:26.430422 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:18:26.431789 systemd[1]: kubelet.service: Deactivated successfully.
Jun 26 07:18:26.432687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:26.448553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:18:26.631268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:26.642723 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 26 07:18:26.724769 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:18:26.724769 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 26 07:18:26.724769 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:18:26.725448 kubelet[2319]: I0626 07:18:26.724865 2319 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 26 07:18:27.451016 kubelet[2319]: I0626 07:18:27.450940 2319 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 26 07:18:27.451016 kubelet[2319]: I0626 07:18:27.450995 2319 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 26 07:18:27.451320 kubelet[2319]: I0626 07:18:27.451283 2319 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 26 07:18:27.472339 kubelet[2319]: I0626 07:18:27.471847 2319 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 26 07:18:27.475425 kubelet[2319]: E0626 07:18:27.475070 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.160.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.490465 kubelet[2319]: I0626 07:18:27.490420 2319 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 26 07:18:27.493935 kubelet[2319]: I0626 07:18:27.493266 2319 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 26 07:18:27.493935 kubelet[2319]: I0626 07:18:27.493567 2319 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 26 07:18:27.493935 kubelet[2319]: I0626 07:18:27.493602 2319 topology_manager.go:138] "Creating topology manager with none policy"
Jun 26 07:18:27.493935 kubelet[2319]: I0626 07:18:27.493616 2319 container_manager_linux.go:301] "Creating device plugin manager"
Jun 26 07:18:27.494919 kubelet[2319]: I0626 07:18:27.494876 2319 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:18:27.496968 kubelet[2319]: I0626 07:18:27.496937 2319 kubelet.go:393] "Attempting to sync node with API server"
Jun 26 07:18:27.497169 kubelet[2319]: I0626 07:18:27.497153 2319 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 26 07:18:27.497275 kubelet[2319]: I0626 07:18:27.497265 2319 kubelet.go:309] "Adding apiserver pod source"
Jun 26 07:18:27.497349 kubelet[2319]: I0626 07:18:27.497341 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 26 07:18:27.498911 kubelet[2319]: W0626 07:18:27.498832 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.160.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-9-ba53898dab&limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.498911 kubelet[2319]: E0626 07:18:27.498918 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.160.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-9-ba53898dab&limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.499577 kubelet[2319]: W0626 07:18:27.499524 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.160.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.499877 kubelet[2319]: E0626 07:18:27.499586 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.160.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.500095 kubelet[2319]: I0626 07:18:27.500079 2319 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 26 07:18:27.505186 kubelet[2319]: W0626 07:18:27.505027 2319 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 26 07:18:27.506216 kubelet[2319]: I0626 07:18:27.506182 2319 server.go:1232] "Started kubelet"
Jun 26 07:18:27.508134 kubelet[2319]: I0626 07:18:27.508106 2319 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 26 07:18:27.510225 kubelet[2319]: I0626 07:18:27.510193 2319 server.go:462] "Adding debug handlers to kubelet server"
Jun 26 07:18:27.510919 kubelet[2319]: I0626 07:18:27.510888 2319 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 26 07:18:27.511428 kubelet[2319]: I0626 07:18:27.511382 2319 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 26 07:18:27.511853 kubelet[2319]: E0626 07:18:27.511704 2319 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-9-ba53898dab.17dc7cbcaa9320dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-9-ba53898dab", UID:"ci-4012.0.0-9-ba53898dab", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-9-ba53898dab"}, FirstTimestamp:time.Date(2024, time.June, 26, 7, 18, 27, 506151645, time.Local), LastTimestamp:time.Date(2024, time.June, 26, 7, 18, 27, 506151645, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-9-ba53898dab"}': 'Post "https://64.23.160.249:6443/api/v1/namespaces/default/events": dial tcp 64.23.160.249:6443: connect: connection refused'(may retry after sleeping)
Jun 26 07:18:27.516363 kubelet[2319]: E0626 07:18:27.516155 2319 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 26 07:18:27.516363 kubelet[2319]: E0626 07:18:27.516197 2319 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 26 07:18:27.520496 kubelet[2319]: I0626 07:18:27.520101 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 26 07:18:27.524684 kubelet[2319]: E0626 07:18:27.523805 2319 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-9-ba53898dab\" not found"
Jun 26 07:18:27.524684 kubelet[2319]: I0626 07:18:27.523863 2319 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 26 07:18:27.524684 kubelet[2319]: I0626 07:18:27.524020 2319 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 26 07:18:27.524684 kubelet[2319]: I0626 07:18:27.524120 2319 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 26 07:18:27.524684 kubelet[2319]: W0626 07:18:27.524578 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.160.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.524684 kubelet[2319]: E0626 07:18:27.524635 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.160.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.527475 kubelet[2319]: E0626 07:18:27.527442 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.160.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-9-ba53898dab?timeout=10s\": dial tcp 64.23.160.249:6443: connect: connection refused" interval="200ms"
Jun 26 07:18:27.544019 kubelet[2319]: I0626 07:18:27.542361 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 26 07:18:27.553299 kubelet[2319]: I0626 07:18:27.553263 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 26 07:18:27.553535 kubelet[2319]: I0626 07:18:27.553520 2319 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 26 07:18:27.553635 kubelet[2319]: I0626 07:18:27.553626 2319 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 26 07:18:27.553810 kubelet[2319]: E0626 07:18:27.553798 2319 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 26 07:18:27.557290 kubelet[2319]: W0626 07:18:27.557250 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.160.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.557490 kubelet[2319]: E0626 07:18:27.557477 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.160.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:27.597726 kubelet[2319]: I0626 07:18:27.597680 2319 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 26 07:18:27.597726 kubelet[2319]: I0626 07:18:27.597722 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 26 07:18:27.598079 kubelet[2319]: I0626 07:18:27.597762 2319 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:18:27.600251 kubelet[2319]: I0626 07:18:27.600178 2319 policy_none.go:49] "None policy: Start"
Jun 26 07:18:27.601316 kubelet[2319]: I0626 07:18:27.601285 2319 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 26 07:18:27.601444 kubelet[2319]: I0626 07:18:27.601335 2319 state_mem.go:35] "Initializing new in-memory state store"
Jun 26 07:18:27.609560 kubelet[2319]: I0626 07:18:27.609108 2319 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 26 07:18:27.609725 kubelet[2319]: I0626 07:18:27.609608 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 26 07:18:27.612878 kubelet[2319]: E0626 07:18:27.612840 2319 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-9-ba53898dab\" not found"
Jun 26 07:18:27.626484 kubelet[2319]: I0626 07:18:27.626419 2319 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.627237 kubelet[2319]: E0626 07:18:27.627195 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.160.249:6443/api/v1/nodes\": dial tcp 64.23.160.249:6443: connect: connection refused" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.655078 kubelet[2319]: I0626 07:18:27.654368 2319 topology_manager.go:215] "Topology Admit Handler" podUID="15073713f562e2b13ae10bd0eb1acc0d" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.656664 kubelet[2319]: I0626 07:18:27.655904 2319 topology_manager.go:215] "Topology Admit Handler" podUID="a19a26cfb62a0b70238644032adc65e3" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.657569 kubelet[2319]: I0626 07:18:27.657005 2319 topology_manager.go:215] "Topology Admit Handler" podUID="fc10d40a2f76f3d94675b2d30bd1e163" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.733030 kubelet[2319]: E0626 07:18:27.729261 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.160.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-9-ba53898dab?timeout=10s\": dial tcp 64.23.160.249:6443: connect: connection refused" interval="400ms"
Jun 26 07:18:27.827585 kubelet[2319]: I0626 07:18:27.827534 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827585 kubelet[2319]: I0626 07:18:27.827588 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827806 kubelet[2319]: I0626 07:18:27.827611 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827806 kubelet[2319]: I0626 07:18:27.827636 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827806 kubelet[2319]: I0626 07:18:27.827661 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827806 kubelet[2319]: I0626 07:18:27.827682 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827806 kubelet[2319]: I0626 07:18:27.827717 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc10d40a2f76f3d94675b2d30bd1e163-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-9-ba53898dab\" (UID: \"fc10d40a2f76f3d94675b2d30bd1e163\") " pod="kube-system/kube-scheduler-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827947 kubelet[2319]: I0626 07:18:27.827745 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.827947 kubelet[2319]: I0626 07:18:27.827772 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.829002 kubelet[2319]: I0626 07:18:27.828935 2319 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.829421 kubelet[2319]: E0626 07:18:27.829402 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.160.249:6443/api/v1/nodes\": dial tcp 64.23.160.249:6443: connect: connection refused" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:27.962403 kubelet[2319]: E0626 07:18:27.962351 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:27.963343 containerd[1585]: time="2024-06-26T07:18:27.963302269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-9-ba53898dab,Uid:15073713f562e2b13ae10bd0eb1acc0d,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:27.966160 kubelet[2319]: E0626 07:18:27.965968 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:27.967375 kubelet[2319]: E0626 07:18:27.966870 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:27.970140 containerd[1585]: time="2024-06-26T07:18:27.970090090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-9-ba53898dab,Uid:fc10d40a2f76f3d94675b2d30bd1e163,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:27.970807 containerd[1585]: time="2024-06-26T07:18:27.970092740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-9-ba53898dab,Uid:a19a26cfb62a0b70238644032adc65e3,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:28.130501 kubelet[2319]: E0626 07:18:28.130351 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.160.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-9-ba53898dab?timeout=10s\": dial tcp 64.23.160.249:6443: connect: connection refused" interval="800ms"
Jun 26 07:18:28.231926 kubelet[2319]: I0626 07:18:28.231805 2319 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:28.232396 kubelet[2319]: E0626 07:18:28.232345 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.160.249:6443/api/v1/nodes\": dial tcp 64.23.160.249:6443: connect: connection refused" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:28.528619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799387452.mount: Deactivated successfully.
Jun 26 07:18:28.537021 containerd[1585]: time="2024-06-26T07:18:28.535375715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:18:28.537021 containerd[1585]: time="2024-06-26T07:18:28.536569184Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:18:28.538701 containerd[1585]: time="2024-06-26T07:18:28.538362545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 26 07:18:28.538701 containerd[1585]: time="2024-06-26T07:18:28.538414810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 26 07:18:28.539796 containerd[1585]: time="2024-06-26T07:18:28.539756002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 26 07:18:28.540379 containerd[1585]: time="2024-06-26T07:18:28.540355213Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:18:28.543998 containerd[1585]: time="2024-06-26T07:18:28.542355333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.965608ms"
Jun 26 07:18:28.543998 containerd[1585]: time="2024-06-26T07:18:28.543119709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:18:28.544418 containerd[1585]: time="2024-06-26T07:18:28.544383233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 26 07:18:28.546210 containerd[1585]: time="2024-06-26T07:18:28.546167126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.403918ms"
Jun 26 07:18:28.550309 containerd[1585]: time="2024-06-26T07:18:28.550250115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.201106ms"
Jun 26 07:18:28.634200 kubelet[2319]: W0626 07:18:28.634064 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.160.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:28.634200 kubelet[2319]: E0626 07:18:28.634159 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.160.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:28.717173 containerd[1585]: time="2024-06-26T07:18:28.717030778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:28.718161 containerd[1585]: time="2024-06-26T07:18:28.718060459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.718493 containerd[1585]: time="2024-06-26T07:18:28.718372419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:28.718821 containerd[1585]: time="2024-06-26T07:18:28.718430407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.720537 containerd[1585]: time="2024-06-26T07:18:28.719060118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:28.720537 containerd[1585]: time="2024-06-26T07:18:28.719132291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.720537 containerd[1585]: time="2024-06-26T07:18:28.719153842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:28.720537 containerd[1585]: time="2024-06-26T07:18:28.719168926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.727160 containerd[1585]: time="2024-06-26T07:18:28.726370784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:28.727160 containerd[1585]: time="2024-06-26T07:18:28.726439387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.727160 containerd[1585]: time="2024-06-26T07:18:28.726464865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:28.727160 containerd[1585]: time="2024-06-26T07:18:28.726476213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:28.783172 kubelet[2319]: W0626 07:18:28.781416 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.160.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-9-ba53898dab&limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:28.783172 kubelet[2319]: E0626 07:18:28.781483 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.160.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-9-ba53898dab&limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused
Jun 26 07:18:28.855102 containerd[1585]: time="2024-06-26T07:18:28.853801777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-9-ba53898dab,Uid:a19a26cfb62a0b70238644032adc65e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4aeace06e54b75512eb3f071cec2383d28e4d721bd797cbb788278408b58922d\""
Jun 26 07:18:28.857104 kubelet[2319]: E0626 07:18:28.856817 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:28.862452 containerd[1585]: time="2024-06-26T07:18:28.862173654Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-9-ba53898dab,Uid:15073713f562e2b13ae10bd0eb1acc0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b190552a652f68bd19943db8f2af99ff9dc778b06886b1ff5c460af8f3b61786\"" Jun 26 07:18:28.863773 kubelet[2319]: E0626 07:18:28.863570 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:28.864481 containerd[1585]: time="2024-06-26T07:18:28.863222255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-9-ba53898dab,Uid:fc10d40a2f76f3d94675b2d30bd1e163,Namespace:kube-system,Attempt:0,} returns sandbox id \"b02a0293ebf54b6dc37519ee546d21518f69191b4386bee8a03e230ecb6bfeab\"" Jun 26 07:18:28.865691 kubelet[2319]: E0626 07:18:28.865670 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:28.866717 containerd[1585]: time="2024-06-26T07:18:28.866656740Z" level=info msg="CreateContainer within sandbox \"4aeace06e54b75512eb3f071cec2383d28e4d721bd797cbb788278408b58922d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 26 07:18:28.869273 containerd[1585]: time="2024-06-26T07:18:28.869206844Z" level=info msg="CreateContainer within sandbox \"b190552a652f68bd19943db8f2af99ff9dc778b06886b1ff5c460af8f3b61786\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 26 07:18:28.872476 containerd[1585]: time="2024-06-26T07:18:28.872375561Z" level=info msg="CreateContainer within sandbox \"b02a0293ebf54b6dc37519ee546d21518f69191b4386bee8a03e230ecb6bfeab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 26 07:18:28.886651 containerd[1585]: time="2024-06-26T07:18:28.886512100Z" level=info msg="CreateContainer within sandbox 
\"4aeace06e54b75512eb3f071cec2383d28e4d721bd797cbb788278408b58922d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c594dc5d9b05966534144dc2073690bb26d0c882863edc8664146240dc609766\"" Jun 26 07:18:28.888033 containerd[1585]: time="2024-06-26T07:18:28.887853220Z" level=info msg="StartContainer for \"c594dc5d9b05966534144dc2073690bb26d0c882863edc8664146240dc609766\"" Jun 26 07:18:28.897331 containerd[1585]: time="2024-06-26T07:18:28.897284885Z" level=info msg="CreateContainer within sandbox \"b190552a652f68bd19943db8f2af99ff9dc778b06886b1ff5c460af8f3b61786\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34f957d64c7c7394edc69923e70e728790ebfb9b4f55e09c8ac19d2178584f58\"" Jun 26 07:18:28.898479 containerd[1585]: time="2024-06-26T07:18:28.898324897Z" level=info msg="StartContainer for \"34f957d64c7c7394edc69923e70e728790ebfb9b4f55e09c8ac19d2178584f58\"" Jun 26 07:18:28.901182 containerd[1585]: time="2024-06-26T07:18:28.901109420Z" level=info msg="CreateContainer within sandbox \"b02a0293ebf54b6dc37519ee546d21518f69191b4386bee8a03e230ecb6bfeab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ee678591af71a1ac05c765897a674f6915a18c5cced79c1d45730646f637720\"" Jun 26 07:18:28.903104 containerd[1585]: time="2024-06-26T07:18:28.901833602Z" level=info msg="StartContainer for \"9ee678591af71a1ac05c765897a674f6915a18c5cced79c1d45730646f637720\"" Jun 26 07:18:28.931120 kubelet[2319]: E0626 07:18:28.931052 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.160.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-9-ba53898dab?timeout=10s\": dial tcp 64.23.160.249:6443: connect: connection refused" interval="1.6s" Jun 26 07:18:28.933720 kubelet[2319]: W0626 07:18:28.933608 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get 
"https://64.23.160.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused Jun 26 07:18:28.933720 kubelet[2319]: E0626 07:18:28.933673 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.160.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused Jun 26 07:18:29.037042 kubelet[2319]: I0626 07:18:29.036407 2319 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab" Jun 26 07:18:29.038560 kubelet[2319]: W0626 07:18:29.038481 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.160.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused Jun 26 07:18:29.041186 kubelet[2319]: E0626 07:18:29.041031 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.160.249:6443/api/v1/nodes\": dial tcp 64.23.160.249:6443: connect: connection refused" node="ci-4012.0.0-9-ba53898dab" Jun 26 07:18:29.041186 kubelet[2319]: E0626 07:18:29.038808 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.160.249:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.160.249:6443: connect: connection refused Jun 26 07:18:29.087543 containerd[1585]: time="2024-06-26T07:18:29.087495326Z" level=info msg="StartContainer for \"9ee678591af71a1ac05c765897a674f6915a18c5cced79c1d45730646f637720\" returns successfully" Jun 26 07:18:29.116058 containerd[1585]: time="2024-06-26T07:18:29.115285081Z" level=info msg="StartContainer for \"34f957d64c7c7394edc69923e70e728790ebfb9b4f55e09c8ac19d2178584f58\" returns successfully" Jun 26 07:18:29.125477 
containerd[1585]: time="2024-06-26T07:18:29.125029572Z" level=info msg="StartContainer for \"c594dc5d9b05966534144dc2073690bb26d0c882863edc8664146240dc609766\" returns successfully" Jun 26 07:18:29.604001 kubelet[2319]: E0626 07:18:29.602949 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:29.608583 kubelet[2319]: E0626 07:18:29.608325 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:29.610926 kubelet[2319]: E0626 07:18:29.610897 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:30.612006 kubelet[2319]: E0626 07:18:30.611590 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:30.643506 kubelet[2319]: I0626 07:18:30.643039 2319 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab" Jun 26 07:18:31.371012 kubelet[2319]: I0626 07:18:31.369972 2319 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-9-ba53898dab" Jun 26 07:18:31.502494 kubelet[2319]: I0626 07:18:31.502435 2319 apiserver.go:52] "Watching apiserver" Jun 26 07:18:31.525194 kubelet[2319]: I0626 07:18:31.525065 2319 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 07:18:31.865466 kubelet[2319]: E0626 07:18:31.865283 2319 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab" Jun 26 07:18:31.865934 kubelet[2319]: E0626 07:18:31.865799 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:32.281048 kubelet[2319]: W0626 07:18:32.280212 2319 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:18:32.281879 kubelet[2319]: E0626 07:18:32.281713 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:32.616635 kubelet[2319]: E0626 07:18:32.616119 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:18:34.274193 systemd[1]: Reloading requested from client PID 2591 ('systemctl') (unit session-7.scope)... Jun 26 07:18:34.274212 systemd[1]: Reloading... Jun 26 07:18:34.360020 zram_generator::config[2625]: No configuration found. Jun 26 07:18:34.547214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:18:34.653890 systemd[1]: Reloading finished in 378 ms. Jun 26 07:18:34.686560 kubelet[2319]: I0626 07:18:34.686304 2319 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:18:34.686707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:18:34.697669 systemd[1]: kubelet.service: Deactivated successfully. 
Jun 26 07:18:34.698211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:34.707455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 26 07:18:34.857330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 26 07:18:34.870727 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 26 07:18:34.961287 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:18:34.961287 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 26 07:18:34.961287 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 26 07:18:34.962861 kubelet[2689]: I0626 07:18:34.961588    2689 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 26 07:18:34.967100 kubelet[2689]: I0626 07:18:34.966616    2689 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 26 07:18:34.967100 kubelet[2689]: I0626 07:18:34.966644    2689 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 26 07:18:34.967100 kubelet[2689]: I0626 07:18:34.966835    2689 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 26 07:18:34.969180 kubelet[2689]: I0626 07:18:34.969034    2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 26 07:18:34.970563 kubelet[2689]: I0626 07:18:34.970522    2689 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 26 07:18:34.986154 kubelet[2689]: I0626 07:18:34.986065    2689 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 26 07:18:34.987317 kubelet[2689]: I0626 07:18:34.986862    2689 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 26 07:18:34.987317 kubelet[2689]: I0626 07:18:34.987108    2689 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 26 07:18:34.987317 kubelet[2689]: I0626 07:18:34.987138    2689 topology_manager.go:138] "Creating topology manager with none policy"
Jun 26 07:18:34.987317 kubelet[2689]: I0626 07:18:34.987147    2689 container_manager_linux.go:301] "Creating device plugin manager"
Jun 26 07:18:34.987317 kubelet[2689]: I0626 07:18:34.987202    2689 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:18:34.987677 kubelet[2689]: I0626 07:18:34.987657    2689 kubelet.go:393] "Attempting to sync node with API server"
Jun 26 07:18:34.988295 kubelet[2689]: I0626 07:18:34.988272    2689 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 26 07:18:34.988434 kubelet[2689]: I0626 07:18:34.988422    2689 kubelet.go:309] "Adding apiserver pod source"
Jun 26 07:18:34.988604 sudo[2702]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 26 07:18:34.989587 kubelet[2689]: I0626 07:18:34.989565    2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 26 07:18:34.990849 sudo[2702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jun 26 07:18:34.998057 kubelet[2689]: I0626 07:18:34.998025    2689 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 26 07:18:35.000000 kubelet[2689]: I0626 07:18:34.999829    2689 server.go:1232] "Started kubelet"
Jun 26 07:18:35.004021 kubelet[2689]: E0626 07:18:35.003368    2689 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 26 07:18:35.004021 kubelet[2689]: E0626 07:18:35.003416    2689 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 26 07:18:35.006163 kubelet[2689]: I0626 07:18:35.006112    2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 26 07:18:35.017529 kubelet[2689]: I0626 07:18:35.015086    2689 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 26 07:18:35.017867 kubelet[2689]: I0626 07:18:35.017831    2689 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 26 07:18:35.019677 kubelet[2689]: I0626 07:18:35.019644    2689 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 26 07:18:35.019852 kubelet[2689]: I0626 07:18:35.019831    2689 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 26 07:18:35.023221 kubelet[2689]: I0626 07:18:35.023188    2689 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 26 07:18:35.024390 kubelet[2689]: I0626 07:18:35.024358    2689 server.go:462] "Adding debug handlers to kubelet server"
Jun 26 07:18:35.025414 kubelet[2689]: I0626 07:18:35.025384    2689 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 26 07:18:35.058510 kubelet[2689]: I0626 07:18:35.057743    2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 26 07:18:35.071194 kubelet[2689]: I0626 07:18:35.071159    2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 26 07:18:35.071392 kubelet[2689]: I0626 07:18:35.071379    2689 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 26 07:18:35.071505 kubelet[2689]: I0626 07:18:35.071468    2689 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 26 07:18:35.071646 kubelet[2689]: E0626 07:18:35.071632    2689 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 26 07:18:35.135164 kubelet[2689]: I0626 07:18:35.135035    2689 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.149489 kubelet[2689]: I0626 07:18:35.149221    2689 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.149489 kubelet[2689]: I0626 07:18:35.149385    2689 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.171850 kubelet[2689]: E0626 07:18:35.171790    2689 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 26 07:18:35.212252 kubelet[2689]: I0626 07:18:35.212189    2689 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 26 07:18:35.212252 kubelet[2689]: I0626 07:18:35.212258    2689 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 26 07:18:35.212482 kubelet[2689]: I0626 07:18:35.212286    2689 state_mem.go:36] "Initialized new in-memory state store"
Jun 26 07:18:35.212512 kubelet[2689]: I0626 07:18:35.212507    2689 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 26 07:18:35.212929 kubelet[2689]: I0626 07:18:35.212543    2689 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 26 07:18:35.212929 kubelet[2689]: I0626 07:18:35.212558    2689 policy_none.go:49] "None policy: Start"
Jun 26 07:18:35.218000 kubelet[2689]: I0626 07:18:35.217486    2689 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 26 07:18:35.218000 kubelet[2689]: I0626 07:18:35.217524    2689 state_mem.go:35] "Initializing new in-memory state store"
Jun 26 07:18:35.218000 kubelet[2689]: I0626 07:18:35.217782    2689 state_mem.go:75] "Updated machine memory state"
Jun 26 07:18:35.222268 kubelet[2689]: I0626 07:18:35.221777    2689 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 26 07:18:35.227060 kubelet[2689]: I0626 07:18:35.226378    2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 26 07:18:35.373339 kubelet[2689]: I0626 07:18:35.372897    2689 topology_manager.go:215] "Topology Admit Handler" podUID="15073713f562e2b13ae10bd0eb1acc0d" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.373339 kubelet[2689]: I0626 07:18:35.373038    2689 topology_manager.go:215] "Topology Admit Handler" podUID="a19a26cfb62a0b70238644032adc65e3" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.373339 kubelet[2689]: I0626 07:18:35.373090    2689 topology_manager.go:215] "Topology Admit Handler" podUID="fc10d40a2f76f3d94675b2d30bd1e163" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.385036 kubelet[2689]: W0626 07:18:35.384547    2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 26 07:18:35.385036 kubelet[2689]: W0626 07:18:35.384625    2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 26 07:18:35.385036 kubelet[2689]: E0626 07:18:35.384706    2689 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" already exists" pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.385036 kubelet[2689]: W0626 07:18:35.384588    2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 26 07:18:35.430792 kubelet[2689]: I0626 07:18:35.430189    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.430792 kubelet[2689]: I0626 07:18:35.430257    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc10d40a2f76f3d94675b2d30bd1e163-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-9-ba53898dab\" (UID: \"fc10d40a2f76f3d94675b2d30bd1e163\") " pod="kube-system/kube-scheduler-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.430792 kubelet[2689]: I0626 07:18:35.430320    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.430792 kubelet[2689]: I0626 07:18:35.430352    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.430792 kubelet[2689]: I0626 07:18:35.430452    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15073713f562e2b13ae10bd0eb1acc0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-9-ba53898dab\" (UID: \"15073713f562e2b13ae10bd0eb1acc0d\") " pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.431099 kubelet[2689]: I0626 07:18:35.430485    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.431099 kubelet[2689]: I0626 07:18:35.430516    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.431099 kubelet[2689]: I0626 07:18:35.430546    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.431099 kubelet[2689]: I0626 07:18:35.430592    2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a19a26cfb62a0b70238644032adc65e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-9-ba53898dab\" (UID: \"a19a26cfb62a0b70238644032adc65e3\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab"
Jun 26 07:18:35.688652 kubelet[2689]: E0626 07:18:35.686873    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:35.688652 kubelet[2689]: E0626 07:18:35.687182    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:35.688652 kubelet[2689]: E0626 07:18:35.687533    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:35.801146 sudo[2702]: pam_unix(sudo:session): session closed for user root
Jun 26 07:18:35.990627 kubelet[2689]: I0626 07:18:35.990453    2689 apiserver.go:52] "Watching apiserver"
Jun 26 07:18:36.021177 kubelet[2689]: I0626 07:18:36.021110    2689 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 26 07:18:36.104009 kubelet[2689]: E0626 07:18:36.101394    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:36.104491 kubelet[2689]: E0626 07:18:36.104460    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:36.105712 kubelet[2689]: E0626 07:18:36.105688    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:36.146942 kubelet[2689]: I0626 07:18:36.145411    2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-9-ba53898dab" podStartSLOduration=1.1443387999999999 podCreationTimestamp="2024-06-26 07:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:36.133318066 +0000 UTC m=+1.256055349" watchObservedRunningTime="2024-06-26 07:18:36.1443388 +0000 UTC m=+1.267076076"
Jun 26 07:18:36.156166 kubelet[2689]: I0626 07:18:36.155989    2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-9-ba53898dab" podStartSLOduration=4.155938726 podCreationTimestamp="2024-06-26 07:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:36.145558756 +0000 UTC m=+1.268296024" watchObservedRunningTime="2024-06-26 07:18:36.155938726 +0000 UTC m=+1.278676000"
Jun 26 07:18:36.156166 kubelet[2689]: I0626 07:18:36.156094    2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-9-ba53898dab" podStartSLOduration=1.15607932 podCreationTimestamp="2024-06-26 07:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:36.156037074 +0000 UTC m=+1.278774353" watchObservedRunningTime="2024-06-26 07:18:36.15607932 +0000 UTC m=+1.278816602"
Jun 26 07:18:37.104350 kubelet[2689]: E0626 07:18:37.104258    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:37.106493 kubelet[2689]: E0626 07:18:37.106393    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:37.541516 sudo[1803]: pam_unix(sudo:session): session closed for user root
Jun 26 07:18:37.547865 sshd[1797]: pam_unix(sshd:session): session closed for user core
Jun 26 07:18:37.552919 systemd[1]: sshd@6-64.23.160.249:22-147.75.109.163:59418.service: Deactivated successfully.
Jun 26 07:18:37.559616 systemd[1]: session-7.scope: Deactivated successfully.
Jun 26 07:18:37.563199 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit.
Jun 26 07:18:37.564667 systemd-logind[1553]: Removed session 7.
Jun 26 07:18:38.213381 kubelet[2689]: E0626 07:18:38.213274    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:38.627252 kubelet[2689]: E0626 07:18:38.627045    2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:43.814893 update_engine[1562]: I0626 07:18:43.814100  1562 update_attempter.cc:509] Updating boot flags...
Jun 26 07:18:43.851039 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2766)
Jun 26 07:18:43.914255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2768)
Jun 26 07:18:43.968075 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2768)
Jun 26 07:18:46.467192 kubelet[2689]: E0626 07:18:46.466101 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:48.233846 kubelet[2689]: E0626 07:18:48.233801 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:48.538580 kubelet[2689]: I0626 07:18:48.537863 2689 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 26 07:18:48.539258 containerd[1585]: time="2024-06-26T07:18:48.538673119Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 26 07:18:48.541250 kubelet[2689]: I0626 07:18:48.540541 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 26 07:18:48.639009 kubelet[2689]: E0626 07:18:48.638778 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.134693 kubelet[2689]: E0626 07:18:49.134257 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.335021 kubelet[2689]: I0626 07:18:49.331799 2689 topology_manager.go:215] "Topology Admit Handler" podUID="7d491493-f845-4bf4-be8a-149b9944a81e" podNamespace="kube-system" podName="kube-proxy-hv6nw"
Jun 26 07:18:49.361621 kubelet[2689]: I0626 07:18:49.361577 2689 topology_manager.go:215] "Topology Admit Handler" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" podNamespace="kube-system" podName="cilium-dhcdw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442550 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-etc-cni-netd\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442689 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cni-path\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442739 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d491493-f845-4bf4-be8a-149b9944a81e-lib-modules\") pod \"kube-proxy-hv6nw\" (UID: \"7d491493-f845-4bf4-be8a-149b9944a81e\") " pod="kube-system/kube-proxy-hv6nw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442774 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d491493-f845-4bf4-be8a-149b9944a81e-xtables-lock\") pod \"kube-proxy-hv6nw\" (UID: \"7d491493-f845-4bf4-be8a-149b9944a81e\") " pod="kube-system/kube-proxy-hv6nw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442807 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-xtables-lock\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.444591 kubelet[2689]: I0626 07:18:49.442838 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-config-path\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.442871 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-bpf-maps\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.442902 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-kernel\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.442933 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d491493-f845-4bf4-be8a-149b9944a81e-kube-proxy\") pod \"kube-proxy-hv6nw\" (UID: \"7d491493-f845-4bf4-be8a-149b9944a81e\") " pod="kube-system/kube-proxy-hv6nw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.442962 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hostproc\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.443019 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-cgroup\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.445110 kubelet[2689]: I0626 07:18:49.443076 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-lib-modules\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.447154 kubelet[2689]: I0626 07:18:49.443113 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-clustermesh-secrets\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.447154 kubelet[2689]: I0626 07:18:49.443146 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-net\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.447154 kubelet[2689]: I0626 07:18:49.443179 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hubble-tls\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.447154 kubelet[2689]: I0626 07:18:49.447041 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g589c\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-kube-api-access-g589c\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.447343 kubelet[2689]: I0626 07:18:49.447183 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwwn\" (UniqueName: \"kubernetes.io/projected/7d491493-f845-4bf4-be8a-149b9944a81e-kube-api-access-tmwwn\") pod \"kube-proxy-hv6nw\" (UID: \"7d491493-f845-4bf4-be8a-149b9944a81e\") " pod="kube-system/kube-proxy-hv6nw"
Jun 26 07:18:49.447343 kubelet[2689]: I0626 07:18:49.447255 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-run\") pod \"cilium-dhcdw\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") " pod="kube-system/cilium-dhcdw"
Jun 26 07:18:49.566419 kubelet[2689]: I0626 07:18:49.560592 2689 topology_manager.go:215] "Topology Admit Handler" podUID="b6f61c72-2103-49ca-a367-934173367795" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-vvr6s"
Jun 26 07:18:49.648894 kubelet[2689]: I0626 07:18:49.648831 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvzt\" (UniqueName: \"kubernetes.io/projected/b6f61c72-2103-49ca-a367-934173367795-kube-api-access-8fvzt\") pod \"cilium-operator-6bc8ccdb58-vvr6s\" (UID: \"b6f61c72-2103-49ca-a367-934173367795\") " pod="kube-system/cilium-operator-6bc8ccdb58-vvr6s"
Jun 26 07:18:49.649404 kubelet[2689]: I0626 07:18:49.649378 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f61c72-2103-49ca-a367-934173367795-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-vvr6s\" (UID: \"b6f61c72-2103-49ca-a367-934173367795\") " pod="kube-system/cilium-operator-6bc8ccdb58-vvr6s"
Jun 26 07:18:49.666929 kubelet[2689]: E0626 07:18:49.666499 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.667781 containerd[1585]: time="2024-06-26T07:18:49.667741930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hv6nw,Uid:7d491493-f845-4bf4-be8a-149b9944a81e,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:49.670963 kubelet[2689]: E0626 07:18:49.670850 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.674978 containerd[1585]: time="2024-06-26T07:18:49.674732191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhcdw,Uid:60c1ade2-aff0-4986-8665-b46fb5cc8ed1,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:49.729334 containerd[1585]: time="2024-06-26T07:18:49.728808095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:49.729334 containerd[1585]: time="2024-06-26T07:18:49.728908544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.729334 containerd[1585]: time="2024-06-26T07:18:49.728963278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:49.729334 containerd[1585]: time="2024-06-26T07:18:49.729019176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.736619 containerd[1585]: time="2024-06-26T07:18:49.736415371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:49.738020 containerd[1585]: time="2024-06-26T07:18:49.736682897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.738020 containerd[1585]: time="2024-06-26T07:18:49.736713947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:49.738020 containerd[1585]: time="2024-06-26T07:18:49.736728267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.822284 containerd[1585]: time="2024-06-26T07:18:49.822220687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhcdw,Uid:60c1ade2-aff0-4986-8665-b46fb5cc8ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\""
Jun 26 07:18:49.827580 kubelet[2689]: E0626 07:18:49.827436 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.830159 containerd[1585]: time="2024-06-26T07:18:49.830102480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 26 07:18:49.838765 containerd[1585]: time="2024-06-26T07:18:49.838359551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hv6nw,Uid:7d491493-f845-4bf4-be8a-149b9944a81e,Namespace:kube-system,Attempt:0,} returns sandbox id \"683b8e1a1a614b345423f3e32e12b23a418f63b74e6fb791812f98d659cc6932\""
Jun 26 07:18:49.840701 kubelet[2689]: E0626 07:18:49.840659 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.844795 containerd[1585]: time="2024-06-26T07:18:49.843360298Z" level=info msg="CreateContainer within sandbox \"683b8e1a1a614b345423f3e32e12b23a418f63b74e6fb791812f98d659cc6932\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 26 07:18:49.864454 containerd[1585]: time="2024-06-26T07:18:49.864366833Z" level=info msg="CreateContainer within sandbox \"683b8e1a1a614b345423f3e32e12b23a418f63b74e6fb791812f98d659cc6932\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9776639ed254ae002c05c99ceb3f12fe94bd047eaeaeff4cd4de337231e7cf8\""
Jun 26 07:18:49.868249 containerd[1585]: time="2024-06-26T07:18:49.866380345Z" level=info msg="StartContainer for \"d9776639ed254ae002c05c99ceb3f12fe94bd047eaeaeff4cd4de337231e7cf8\""
Jun 26 07:18:49.889499 kubelet[2689]: E0626 07:18:49.889165 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:49.891894 containerd[1585]: time="2024-06-26T07:18:49.891382670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-vvr6s,Uid:b6f61c72-2103-49ca-a367-934173367795,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:49.942537 containerd[1585]: time="2024-06-26T07:18:49.941532649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:49.942537 containerd[1585]: time="2024-06-26T07:18:49.941599995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.942537 containerd[1585]: time="2024-06-26T07:18:49.941614767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:49.942537 containerd[1585]: time="2024-06-26T07:18:49.941624683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:49.982219 containerd[1585]: time="2024-06-26T07:18:49.982053872Z" level=info msg="StartContainer for \"d9776639ed254ae002c05c99ceb3f12fe94bd047eaeaeff4cd4de337231e7cf8\" returns successfully"
Jun 26 07:18:50.065086 containerd[1585]: time="2024-06-26T07:18:50.064093037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-vvr6s,Uid:b6f61c72-2103-49ca-a367-934173367795,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\""
Jun 26 07:18:50.066493 kubelet[2689]: E0626 07:18:50.066187 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:50.148548 kubelet[2689]: E0626 07:18:50.148436 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:50.160323 kubelet[2689]: I0626 07:18:50.157430 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hv6nw" podStartSLOduration=1.157376981 podCreationTimestamp="2024-06-26 07:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:50.157247934 +0000 UTC m=+15.279985220" watchObservedRunningTime="2024-06-26 07:18:50.157376981 +0000 UTC m=+15.280114264"
Jun 26 07:18:54.594338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285835916.mount: Deactivated successfully.
Jun 26 07:18:57.101714 containerd[1585]: time="2024-06-26T07:18:57.082460663Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735331"
Jun 26 07:18:57.103847 containerd[1585]: time="2024-06-26T07:18:57.102927303Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.272771646s"
Jun 26 07:18:57.103847 containerd[1585]: time="2024-06-26T07:18:57.102993007Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 26 07:18:57.106803 containerd[1585]: time="2024-06-26T07:18:57.106461104Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 26 07:18:57.116025 containerd[1585]: time="2024-06-26T07:18:57.115272598Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 26 07:18:57.135821 containerd[1585]: time="2024-06-26T07:18:57.134799868Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:57.136053 containerd[1585]: time="2024-06-26T07:18:57.136020679Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:57.171469 containerd[1585]: time="2024-06-26T07:18:57.171394343Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\""
Jun 26 07:18:57.172621 containerd[1585]: time="2024-06-26T07:18:57.172396536Z" level=info msg="StartContainer for \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\""
Jun 26 07:18:57.294758 containerd[1585]: time="2024-06-26T07:18:57.294702489Z" level=info msg="StartContainer for \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\" returns successfully"
Jun 26 07:18:57.373163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034-rootfs.mount: Deactivated successfully.
Jun 26 07:18:57.442688 containerd[1585]: time="2024-06-26T07:18:57.419949791Z" level=info msg="shim disconnected" id=ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034 namespace=k8s.io
Jun 26 07:18:57.442688 containerd[1585]: time="2024-06-26T07:18:57.442674880Z" level=warning msg="cleaning up after shim disconnected" id=ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034 namespace=k8s.io
Jun 26 07:18:57.442688 containerd[1585]: time="2024-06-26T07:18:57.442694972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:18:58.183905 kubelet[2689]: E0626 07:18:58.183700 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:58.198020 containerd[1585]: time="2024-06-26T07:18:58.193870526Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 26 07:18:58.232317 containerd[1585]: time="2024-06-26T07:18:58.231875233Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\""
Jun 26 07:18:58.235114 containerd[1585]: time="2024-06-26T07:18:58.234675153Z" level=info msg="StartContainer for \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\""
Jun 26 07:18:58.284972 systemd[1]: run-containerd-runc-k8s.io-f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2-runc.cmmx8F.mount: Deactivated successfully.
Jun 26 07:18:58.328230 containerd[1585]: time="2024-06-26T07:18:58.327211277Z" level=info msg="StartContainer for \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\" returns successfully"
Jun 26 07:18:58.342225 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 26 07:18:58.343499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:18:58.344159 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:18:58.354938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 26 07:18:58.398467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 26 07:18:58.419351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2-rootfs.mount: Deactivated successfully.
Jun 26 07:18:58.421636 containerd[1585]: time="2024-06-26T07:18:58.421129042Z" level=info msg="shim disconnected" id=f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2 namespace=k8s.io
Jun 26 07:18:58.421636 containerd[1585]: time="2024-06-26T07:18:58.421195471Z" level=warning msg="cleaning up after shim disconnected" id=f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2 namespace=k8s.io
Jun 26 07:18:58.421636 containerd[1585]: time="2024-06-26T07:18:58.421206658Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:18:59.191572 kubelet[2689]: E0626 07:18:59.191032 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:18:59.205095 containerd[1585]: time="2024-06-26T07:18:59.204893335Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 26 07:18:59.240843 containerd[1585]: time="2024-06-26T07:18:59.240410513Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\""
Jun 26 07:18:59.256531 containerd[1585]: time="2024-06-26T07:18:59.255521746Z" level=info msg="StartContainer for \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\""
Jun 26 07:18:59.404705 containerd[1585]: time="2024-06-26T07:18:59.404607545Z" level=info msg="StartContainer for \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\" returns successfully"
Jun 26 07:18:59.491356 containerd[1585]: time="2024-06-26T07:18:59.491242560Z" level=info msg="shim disconnected" id=bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e namespace=k8s.io
Jun 26 07:18:59.491356 containerd[1585]: time="2024-06-26T07:18:59.491359672Z" level=warning msg="cleaning up after shim disconnected" id=bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e namespace=k8s.io
Jun 26 07:18:59.491651 containerd[1585]: time="2024-06-26T07:18:59.491379191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:18:59.569043 containerd[1585]: time="2024-06-26T07:18:59.568530054Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:59.570093 containerd[1585]: time="2024-06-26T07:18:59.569831185Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907197"
Jun 26 07:18:59.570857 containerd[1585]: time="2024-06-26T07:18:59.570789318Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 26 07:18:59.572622 containerd[1585]: time="2024-06-26T07:18:59.572581832Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.466052257s"
Jun 26 07:18:59.572622 containerd[1585]: time="2024-06-26T07:18:59.572624769Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 26 07:18:59.576049 containerd[1585]: time="2024-06-26T07:18:59.575879620Z" level=info msg="CreateContainer within sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 26 07:18:59.585427 containerd[1585]: time="2024-06-26T07:18:59.585351612Z" level=info msg="CreateContainer within sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\""
Jun 26 07:18:59.587250 containerd[1585]: time="2024-06-26T07:18:59.586251612Z" level=info msg="StartContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\""
Jun 26 07:18:59.657935 containerd[1585]: time="2024-06-26T07:18:59.657877470Z" level=info msg="StartContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" returns successfully"
Jun 26 07:19:00.204729 kubelet[2689]: E0626 07:19:00.204193 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:00.222641 kubelet[2689]: E0626 07:19:00.222292 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:00.223881 systemd[1]: run-containerd-runc-k8s.io-bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e-runc.mRTL5u.mount: Deactivated successfully.
Jun 26 07:19:00.225923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e-rootfs.mount: Deactivated successfully.
Jun 26 07:19:00.245722 containerd[1585]: time="2024-06-26T07:19:00.243302322Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 26 07:19:00.263311 kubelet[2689]: I0626 07:19:00.263082 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-vvr6s" podStartSLOduration=1.755020306 podCreationTimestamp="2024-06-26 07:18:49 +0000 UTC" firstStartedPulling="2024-06-26 07:18:50.067145324 +0000 UTC m=+15.189882585" lastFinishedPulling="2024-06-26 07:18:59.573405402 +0000 UTC m=+24.696142667" observedRunningTime="2024-06-26 07:19:00.260051874 +0000 UTC m=+25.382789157" watchObservedRunningTime="2024-06-26 07:19:00.261280388 +0000 UTC m=+25.384017673"
Jun 26 07:19:00.323685 containerd[1585]: time="2024-06-26T07:19:00.323591477Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\""
Jun 26 07:19:00.325260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617381433.mount: Deactivated successfully.
Jun 26 07:19:00.327212 containerd[1585]: time="2024-06-26T07:19:00.325253516Z" level=info msg="StartContainer for \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\""
Jun 26 07:19:00.565021 containerd[1585]: time="2024-06-26T07:19:00.560370632Z" level=info msg="StartContainer for \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\" returns successfully"
Jun 26 07:19:00.628895 containerd[1585]: time="2024-06-26T07:19:00.628637419Z" level=info msg="shim disconnected" id=ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63 namespace=k8s.io
Jun 26 07:19:00.628895 containerd[1585]: time="2024-06-26T07:19:00.628881335Z" level=warning msg="cleaning up after shim disconnected" id=ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63 namespace=k8s.io
Jun 26 07:19:00.628895 containerd[1585]: time="2024-06-26T07:19:00.628896400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:19:01.214365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63-rootfs.mount: Deactivated successfully.
Jun 26 07:19:01.228449 kubelet[2689]: E0626 07:19:01.227405 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:01.231693 kubelet[2689]: E0626 07:19:01.231632 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:01.239068 containerd[1585]: time="2024-06-26T07:19:01.236921819Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 26 07:19:01.280328 containerd[1585]: time="2024-06-26T07:19:01.280258481Z" level=info msg="CreateContainer within sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\""
Jun 26 07:19:01.285369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665169744.mount: Deactivated successfully.
Jun 26 07:19:01.309088 containerd[1585]: time="2024-06-26T07:19:01.294484707Z" level=info msg="StartContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\""
Jun 26 07:19:01.489407 containerd[1585]: time="2024-06-26T07:19:01.489112698Z" level=info msg="StartContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" returns successfully"
Jun 26 07:19:01.755136 kubelet[2689]: I0626 07:19:01.754680 2689 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jun 26 07:19:01.814121 kubelet[2689]: I0626 07:19:01.811596 2689 topology_manager.go:215] "Topology Admit Handler" podUID="ece5d8ef-6449-4f00-8aa2-601ebdd24f8e" podNamespace="kube-system" podName="coredns-5dd5756b68-lrhrr"
Jun 26 07:19:01.821046 kubelet[2689]: I0626 07:19:01.818000 2689 topology_manager.go:215] "Topology Admit Handler" podUID="bbfabb79-72af-41f7-8578-ea541b39ab7a" podNamespace="kube-system" podName="coredns-5dd5756b68-67vnr"
Jun 26 07:19:01.863429 kubelet[2689]: I0626 07:19:01.863066 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndcj\" (UniqueName: \"kubernetes.io/projected/ece5d8ef-6449-4f00-8aa2-601ebdd24f8e-kube-api-access-qndcj\") pod \"coredns-5dd5756b68-lrhrr\" (UID: \"ece5d8ef-6449-4f00-8aa2-601ebdd24f8e\") " pod="kube-system/coredns-5dd5756b68-lrhrr"
Jun 26 07:19:01.869288 kubelet[2689]: I0626 07:19:01.866933 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbfabb79-72af-41f7-8578-ea541b39ab7a-config-volume\") pod \"coredns-5dd5756b68-67vnr\" (UID: \"bbfabb79-72af-41f7-8578-ea541b39ab7a\") " pod="kube-system/coredns-5dd5756b68-67vnr"
Jun 26 07:19:01.869288 kubelet[2689]: I0626 07:19:01.869237 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnpx\" (UniqueName: \"kubernetes.io/projected/bbfabb79-72af-41f7-8578-ea541b39ab7a-kube-api-access-cmnpx\") pod \"coredns-5dd5756b68-67vnr\" (UID: \"bbfabb79-72af-41f7-8578-ea541b39ab7a\") " pod="kube-system/coredns-5dd5756b68-67vnr"
Jun 26 07:19:01.871025 kubelet[2689]: I0626 07:19:01.870252 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece5d8ef-6449-4f00-8aa2-601ebdd24f8e-config-volume\") pod \"coredns-5dd5756b68-lrhrr\" (UID: \"ece5d8ef-6449-4f00-8aa2-601ebdd24f8e\") " pod="kube-system/coredns-5dd5756b68-lrhrr"
Jun 26 07:19:02.136360 kubelet[2689]: E0626 07:19:02.134748 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:02.140097 containerd[1585]: time="2024-06-26T07:19:02.138359724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lrhrr,Uid:ece5d8ef-6449-4f00-8aa2-601ebdd24f8e,Namespace:kube-system,Attempt:0,}"
Jun 26 07:19:02.142716 kubelet[2689]: E0626 07:19:02.142262 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:02.144581 containerd[1585]: time="2024-06-26T07:19:02.143505471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-67vnr,Uid:bbfabb79-72af-41f7-8578-ea541b39ab7a,Namespace:kube-system,Attempt:0,}"
Jun 26 07:19:02.313627 kubelet[2689]: E0626 07:19:02.312459 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:02.370307 kubelet[2689]: I0626 07:19:02.369056 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dhcdw" podStartSLOduration=6.092116967 podCreationTimestamp="2024-06-26 07:18:49 +0000 UTC" firstStartedPulling="2024-06-26 07:18:49.82933971 +0000 UTC m=+14.952076971" lastFinishedPulling="2024-06-26 07:18:57.104195863 +0000 UTC m=+22.226933140" observedRunningTime="2024-06-26 07:19:02.355966338 +0000 UTC m=+27.478703636" watchObservedRunningTime="2024-06-26 07:19:02.366973136 +0000 UTC m=+27.489710420"
Jun 26 07:19:03.313735 kubelet[2689]: E0626 07:19:03.313688 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:04.163467 systemd-networkd[1228]: cilium_host: Link UP
Jun 26 07:19:04.163603 systemd-networkd[1228]: cilium_net: Link UP
Jun 26 07:19:04.163607 systemd-networkd[1228]: cilium_net: Gained carrier
Jun 26 07:19:04.163782 systemd-networkd[1228]: cilium_host: Gained carrier
Jun 26 07:19:04.302892 systemd-networkd[1228]: cilium_vxlan: Link UP
Jun 26 07:19:04.302900 systemd-networkd[1228]: cilium_vxlan: Gained carrier
Jun 26 07:19:04.317575 kubelet[2689]: E0626 07:19:04.317547 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:04.730079 kernel: NET: Registered PF_ALG protocol family
Jun 26 07:19:04.731308 systemd-networkd[1228]: cilium_net: Gained IPv6LL
Jun 26 07:19:05.115254 systemd-networkd[1228]: cilium_host: Gained IPv6LL
Jun 26 07:19:05.499156 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL
Jun 26 07:19:05.810789 systemd-networkd[1228]: lxc_health: Link UP
Jun 26 07:19:05.819599 systemd-networkd[1228]: lxc_health: Gained carrier
Jun 26 07:19:06.311352 systemd-networkd[1228]: lxc5ed1182ee3f7: Link UP
Jun 26 07:19:06.314583 kernel: eth0: renamed from tmp2db0d
Jun 26 07:19:06.326636 systemd-networkd[1228]: lxc5ed1182ee3f7: Gained carrier
Jun 26 07:19:06.372660 systemd-networkd[1228]: lxcb16b1fa8844d: Link UP
Jun 26 07:19:06.376895 kernel: eth0: renamed from tmpc7feb
Jun 26 07:19:06.386101 systemd-networkd[1228]: lxcb16b1fa8844d: Gained carrier
Jun 26 07:19:07.551122 systemd-networkd[1228]: lxcb16b1fa8844d: Gained IPv6LL
Jun 26 07:19:07.674248 systemd-networkd[1228]: lxc_health: Gained IPv6LL
Jun 26 07:19:07.682036 kubelet[2689]: E0626 07:19:07.681792 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:08.122248 systemd-networkd[1228]: lxc5ed1182ee3f7: Gained IPv6LL
Jun 26 07:19:08.351023 kubelet[2689]: E0626 07:19:08.350956 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:09.352826 kubelet[2689]: E0626 07:19:09.352601 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:12.814547 containerd[1585]: time="2024-06-26T07:19:12.813537217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:19:12.814547 containerd[1585]: time="2024-06-26T07:19:12.814154776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:19:12.815533 containerd[1585]: time="2024-06-26T07:19:12.814209002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:19:12.815533 containerd[1585]: time="2024-06-26T07:19:12.814244407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:19:12.959227 containerd[1585]: time="2024-06-26T07:19:12.957243411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:19:12.959227 containerd[1585]: time="2024-06-26T07:19:12.957315458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:19:12.959227 containerd[1585]: time="2024-06-26T07:19:12.957331250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:19:12.959227 containerd[1585]: time="2024-06-26T07:19:12.957340606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:19:13.077655 containerd[1585]: time="2024-06-26T07:19:13.074741331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-67vnr,Uid:bbfabb79-72af-41f7-8578-ea541b39ab7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2db0d5bc3c89b06f6b11f401db3f6d2dcb309f17ce26bccaea1469941dd71dff\""
Jun 26 07:19:13.078533 kubelet[2689]: E0626 07:19:13.078492 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:13.089494 containerd[1585]: time="2024-06-26T07:19:13.089056904Z" level=info msg="CreateContainer within sandbox \"2db0d5bc3c89b06f6b11f401db3f6d2dcb309f17ce26bccaea1469941dd71dff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 26 07:19:13.118655 containerd[1585]: time="2024-06-26T07:19:13.118487678Z" level=info msg="CreateContainer within sandbox \"2db0d5bc3c89b06f6b11f401db3f6d2dcb309f17ce26bccaea1469941dd71dff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"682fa7552b5989bff9bbe5f3a4d869f62c5c3e222ba11ed1298b902dfbb57183\""
Jun 26 07:19:13.121596 containerd[1585]: time="2024-06-26T07:19:13.121168958Z" level=info msg="StartContainer for \"682fa7552b5989bff9bbe5f3a4d869f62c5c3e222ba11ed1298b902dfbb57183\""
Jun 26 07:19:13.149693 containerd[1585]: time="2024-06-26T07:19:13.149238572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lrhrr,Uid:ece5d8ef-6449-4f00-8aa2-601ebdd24f8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7feb548bf5b8c7fc30fe451d0648334bc64b19929a3f769f22e12729484c841\""
Jun 26 07:19:13.151248 kubelet[2689]: E0626 07:19:13.150953 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:13.157126 containerd[1585]: time="2024-06-26T07:19:13.156893819Z" level=info msg="CreateContainer within sandbox \"c7feb548bf5b8c7fc30fe451d0648334bc64b19929a3f769f22e12729484c841\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 26 07:19:13.177788 containerd[1585]: time="2024-06-26T07:19:13.177511539Z" level=info msg="CreateContainer within sandbox \"c7feb548bf5b8c7fc30fe451d0648334bc64b19929a3f769f22e12729484c841\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c213e52516b6b9ca317b5f9c08664ac8c784923240b248175c9bc6bc8ca4269\""
Jun 26 07:19:13.179568 containerd[1585]: time="2024-06-26T07:19:13.179120625Z" level=info msg="StartContainer for \"2c213e52516b6b9ca317b5f9c08664ac8c784923240b248175c9bc6bc8ca4269\""
Jun 26 07:19:13.277076 containerd[1585]: time="2024-06-26T07:19:13.276973095Z" level=info msg="StartContainer for \"682fa7552b5989bff9bbe5f3a4d869f62c5c3e222ba11ed1298b902dfbb57183\" returns successfully"
Jun 26 07:19:13.300936 containerd[1585]: time="2024-06-26T07:19:13.300532405Z" level=info msg="StartContainer for \"2c213e52516b6b9ca317b5f9c08664ac8c784923240b248175c9bc6bc8ca4269\" returns successfully"
Jun 26 07:19:13.368300 kubelet[2689]: E0626 07:19:13.367691 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:13.377293 kubelet[2689]: E0626 07:19:13.376502 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:13.394941 kubelet[2689]: I0626 07:19:13.394825 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lrhrr" podStartSLOduration=24.394777809 podCreationTimestamp="2024-06-26 07:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:19:13.393497843 +0000 UTC m=+38.516235126" watchObservedRunningTime="2024-06-26 07:19:13.394777809 +0000 UTC m=+38.517515092"
Jun 26 07:19:13.826603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340387235.mount: Deactivated successfully.
Jun 26 07:19:14.383041 kubelet[2689]: E0626 07:19:14.381076 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:14.383041 kubelet[2689]: E0626 07:19:14.381433 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:14.400400 kubelet[2689]: I0626 07:19:14.400241 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-67vnr" podStartSLOduration=25.400189042 podCreationTimestamp="2024-06-26 07:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:19:13.430267509 +0000 UTC m=+38.553004793" watchObservedRunningTime="2024-06-26 07:19:14.400189042 +0000 UTC m=+39.522926328"
Jun 26 07:19:15.383773 kubelet[2689]: E0626 07:19:15.383549 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:15.383773 kubelet[2689]: E0626 07:19:15.383694 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:19:25.116748 systemd[1]: Started sshd@7-64.23.160.249:22-147.75.109.163:42234.service - OpenSSH per-connection server daemon (147.75.109.163:42234).
Jun 26 07:19:25.183755 sshd[4081]: Accepted publickey for core from 147.75.109.163 port 42234 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:25.188430 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:25.194648 systemd-logind[1553]: New session 8 of user core.
Jun 26 07:19:25.201579 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 26 07:19:25.804316 sshd[4081]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:25.809396 systemd[1]: sshd@7-64.23.160.249:22-147.75.109.163:42234.service: Deactivated successfully.
Jun 26 07:19:25.815253 systemd[1]: session-8.scope: Deactivated successfully.
Jun 26 07:19:25.816104 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit.
Jun 26 07:19:25.817910 systemd-logind[1553]: Removed session 8.
Jun 26 07:19:30.820407 systemd[1]: Started sshd@8-64.23.160.249:22-147.75.109.163:35726.service - OpenSSH per-connection server daemon (147.75.109.163:35726).
Jun 26 07:19:30.874168 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 35726 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:30.876332 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:30.882964 systemd-logind[1553]: New session 9 of user core.
Jun 26 07:19:30.889811 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 26 07:19:31.040900 sshd[4096]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:31.046856 systemd[1]: sshd@8-64.23.160.249:22-147.75.109.163:35726.service: Deactivated successfully.
Jun 26 07:19:31.050587 systemd[1]: session-9.scope: Deactivated successfully.
Jun 26 07:19:31.051913 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit.
Jun 26 07:19:31.053385 systemd-logind[1553]: Removed session 9.
Jun 26 07:19:36.055529 systemd[1]: Started sshd@9-64.23.160.249:22-147.75.109.163:46124.service - OpenSSH per-connection server daemon (147.75.109.163:46124).
Jun 26 07:19:36.100455 sshd[4113]: Accepted publickey for core from 147.75.109.163 port 46124 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:36.102501 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:36.109052 systemd-logind[1553]: New session 10 of user core.
Jun 26 07:19:36.116447 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 26 07:19:36.256007 sshd[4113]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:36.260855 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit.
Jun 26 07:19:36.262498 systemd[1]: sshd@9-64.23.160.249:22-147.75.109.163:46124.service: Deactivated successfully.
Jun 26 07:19:36.269196 systemd[1]: session-10.scope: Deactivated successfully.
Jun 26 07:19:36.271020 systemd-logind[1553]: Removed session 10.
Jun 26 07:19:38.668571 systemd[1]: Started sshd@10-64.23.160.249:22-174.4.72.105:55594.service - OpenSSH per-connection server daemon (174.4.72.105:55594).
Jun 26 07:19:38.795672 sshd[4127]: Connection closed by 174.4.72.105 port 55594 [preauth]
Jun 26 07:19:38.796592 systemd[1]: sshd@10-64.23.160.249:22-174.4.72.105:55594.service: Deactivated successfully.
Jun 26 07:19:41.269644 systemd[1]: Started sshd@11-64.23.160.249:22-147.75.109.163:46128.service - OpenSSH per-connection server daemon (147.75.109.163:46128).
Jun 26 07:19:41.328423 sshd[4132]: Accepted publickey for core from 147.75.109.163 port 46128 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:41.331661 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:41.341530 systemd-logind[1553]: New session 11 of user core.
Jun 26 07:19:41.355419 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 26 07:19:41.551096 sshd[4132]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:41.569972 systemd[1]: Started sshd@12-64.23.160.249:22-147.75.109.163:46144.service - OpenSSH per-connection server daemon (147.75.109.163:46144).
Jun 26 07:19:41.572527 systemd[1]: sshd@11-64.23.160.249:22-147.75.109.163:46128.service: Deactivated successfully.
Jun 26 07:19:41.577882 systemd[1]: session-11.scope: Deactivated successfully.
Jun 26 07:19:41.583472 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit.
Jun 26 07:19:41.587469 systemd-logind[1553]: Removed session 11.
Jun 26 07:19:41.669039 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 46144 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:41.673671 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:41.683858 systemd-logind[1553]: New session 12 of user core.
Jun 26 07:19:41.692410 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 26 07:19:43.181179 sshd[4145]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:43.205450 systemd[1]: Started sshd@13-64.23.160.249:22-147.75.109.163:46158.service - OpenSSH per-connection server daemon (147.75.109.163:46158).
Jun 26 07:19:43.227561 systemd[1]: sshd@12-64.23.160.249:22-147.75.109.163:46144.service: Deactivated successfully.
Jun 26 07:19:43.254463 systemd[1]: session-12.scope: Deactivated successfully.
Jun 26 07:19:43.266638 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit.
Jun 26 07:19:43.279392 systemd-logind[1553]: Removed session 12.
Jun 26 07:19:43.314429 sshd[4156]: Accepted publickey for core from 147.75.109.163 port 46158 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:43.318545 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:43.334361 systemd-logind[1553]: New session 13 of user core.
Jun 26 07:19:43.339546 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 26 07:19:43.606376 sshd[4156]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:43.612765 systemd[1]: sshd@13-64.23.160.249:22-147.75.109.163:46158.service: Deactivated successfully.
Jun 26 07:19:43.618870 systemd[1]: session-13.scope: Deactivated successfully.
Jun 26 07:19:43.622303 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit.
Jun 26 07:19:43.624930 systemd-logind[1553]: Removed session 13.
Jun 26 07:19:48.615759 systemd[1]: Started sshd@14-64.23.160.249:22-147.75.109.163:44610.service - OpenSSH per-connection server daemon (147.75.109.163:44610).
Jun 26 07:19:48.664917 sshd[4173]: Accepted publickey for core from 147.75.109.163 port 44610 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:48.665726 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:48.672770 systemd-logind[1553]: New session 14 of user core.
Jun 26 07:19:48.678568 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 26 07:19:48.819213 sshd[4173]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:48.823512 systemd[1]: sshd@14-64.23.160.249:22-147.75.109.163:44610.service: Deactivated successfully.
Jun 26 07:19:48.828699 systemd[1]: session-14.scope: Deactivated successfully.
Jun 26 07:19:48.829994 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit.
Jun 26 07:19:48.831325 systemd-logind[1553]: Removed session 14.
Jun 26 07:19:53.830508 systemd[1]: Started sshd@15-64.23.160.249:22-147.75.109.163:44618.service - OpenSSH per-connection server daemon (147.75.109.163:44618).
Jun 26 07:19:53.876806 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 44618 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:53.879307 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:53.885325 systemd-logind[1553]: New session 15 of user core.
Jun 26 07:19:53.891930 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 26 07:19:54.033562 sshd[4189]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:54.039505 systemd[1]: sshd@15-64.23.160.249:22-147.75.109.163:44618.service: Deactivated successfully.
Jun 26 07:19:54.045533 systemd[1]: session-15.scope: Deactivated successfully.
Jun 26 07:19:54.050118 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit.
Jun 26 07:19:54.057952 systemd[1]: Started sshd@16-64.23.160.249:22-147.75.109.163:44622.service - OpenSSH per-connection server daemon (147.75.109.163:44622).
Jun 26 07:19:54.060088 systemd-logind[1553]: Removed session 15.
Jun 26 07:19:54.108128 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 44622 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:54.109823 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:54.116745 systemd-logind[1553]: New session 16 of user core.
Jun 26 07:19:54.124570 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 26 07:19:54.475233 sshd[4203]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:54.480727 systemd[1]: Started sshd@17-64.23.160.249:22-147.75.109.163:44628.service - OpenSSH per-connection server daemon (147.75.109.163:44628).
Jun 26 07:19:54.489337 systemd[1]: sshd@16-64.23.160.249:22-147.75.109.163:44622.service: Deactivated successfully.
Jun 26 07:19:54.500829 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit.
Jun 26 07:19:54.502542 systemd[1]: session-16.scope: Deactivated successfully.
Jun 26 07:19:54.509879 systemd-logind[1553]: Removed session 16.
Jun 26 07:19:54.575715 sshd[4216]: Accepted publickey for core from 147.75.109.163 port 44628 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:54.578139 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:54.584505 systemd-logind[1553]: New session 17 of user core.
Jun 26 07:19:54.588412 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 26 07:19:55.803287 sshd[4216]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:55.820961 systemd[1]: Started sshd@18-64.23.160.249:22-147.75.109.163:44634.service - OpenSSH per-connection server daemon (147.75.109.163:44634).
Jun 26 07:19:55.821475 systemd[1]: sshd@17-64.23.160.249:22-147.75.109.163:44628.service: Deactivated successfully.
Jun 26 07:19:55.836797 systemd[1]: session-17.scope: Deactivated successfully.
Jun 26 07:19:55.842783 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit.
Jun 26 07:19:55.847277 systemd-logind[1553]: Removed session 17.
Jun 26 07:19:55.900837 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 44634 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:55.902525 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:55.915185 systemd-logind[1553]: New session 18 of user core.
Jun 26 07:19:55.917425 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 26 07:19:56.417123 sshd[4233]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:56.431853 systemd[1]: Started sshd@19-64.23.160.249:22-147.75.109.163:56174.service - OpenSSH per-connection server daemon (147.75.109.163:56174).
Jun 26 07:19:56.435488 systemd[1]: sshd@18-64.23.160.249:22-147.75.109.163:44634.service: Deactivated successfully.
Jun 26 07:19:56.444486 systemd[1]: session-18.scope: Deactivated successfully.
Jun 26 07:19:56.448111 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit.
Jun 26 07:19:56.451155 systemd-logind[1553]: Removed session 18.
Jun 26 07:19:56.488577 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 56174 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:19:56.490940 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:19:56.498334 systemd-logind[1553]: New session 19 of user core.
Jun 26 07:19:56.507492 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 26 07:19:56.651728 sshd[4247]: pam_unix(sshd:session): session closed for user core
Jun 26 07:19:56.655652 systemd[1]: sshd@19-64.23.160.249:22-147.75.109.163:56174.service: Deactivated successfully.
Jun 26 07:19:56.662199 systemd[1]: session-19.scope: Deactivated successfully.
Jun 26 07:19:56.663842 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit.
Jun 26 07:19:56.665403 systemd-logind[1553]: Removed session 19.
Jun 26 07:19:58.073448 kubelet[2689]: E0626 07:19:58.073330 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:01.674277 systemd[1]: Started sshd@20-64.23.160.249:22-147.75.109.163:56176.service - OpenSSH per-connection server daemon (147.75.109.163:56176).
Jun 26 07:20:01.737661 sshd[4263]: Accepted publickey for core from 147.75.109.163 port 56176 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:01.737814 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:01.753803 systemd-logind[1553]: New session 20 of user core.
Jun 26 07:20:01.760477 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 26 07:20:01.956740 sshd[4263]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:01.962619 systemd[1]: sshd@20-64.23.160.249:22-147.75.109.163:56176.service: Deactivated successfully.
Jun 26 07:20:01.963422 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit.
Jun 26 07:20:01.971113 systemd[1]: session-20.scope: Deactivated successfully.
Jun 26 07:20:01.976327 systemd-logind[1553]: Removed session 20.
Jun 26 07:20:03.073394 kubelet[2689]: E0626 07:20:03.072841 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:06.966704 systemd[1]: Started sshd@21-64.23.160.249:22-147.75.109.163:50500.service - OpenSSH per-connection server daemon (147.75.109.163:50500).
Jun 26 07:20:07.021875 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 50500 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:07.023184 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:07.029044 systemd-logind[1553]: New session 21 of user core.
Jun 26 07:20:07.038633 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 26 07:20:07.178358 sshd[4280]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:07.185622 systemd[1]: sshd@21-64.23.160.249:22-147.75.109.163:50500.service: Deactivated successfully.
Jun 26 07:20:07.191795 systemd[1]: session-21.scope: Deactivated successfully.
Jun 26 07:20:07.194286 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit.
Jun 26 07:20:07.195570 systemd-logind[1553]: Removed session 21.
Jun 26 07:20:12.073539 kubelet[2689]: E0626 07:20:12.072968 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:12.074289 kubelet[2689]: E0626 07:20:12.074054 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:12.074988 kubelet[2689]: E0626 07:20:12.074903 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:12.186431 systemd[1]: Started sshd@22-64.23.160.249:22-147.75.109.163:50510.service - OpenSSH per-connection server daemon (147.75.109.163:50510).
Jun 26 07:20:12.234057 sshd[4294]: Accepted publickey for core from 147.75.109.163 port 50510 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:12.236810 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:12.249900 systemd-logind[1553]: New session 22 of user core.
Jun 26 07:20:12.255451 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 26 07:20:12.421548 sshd[4294]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:12.429632 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit.
Jun 26 07:20:12.431728 systemd[1]: sshd@22-64.23.160.249:22-147.75.109.163:50510.service: Deactivated successfully.
Jun 26 07:20:12.436724 systemd[1]: session-22.scope: Deactivated successfully.
Jun 26 07:20:12.438595 systemd-logind[1553]: Removed session 22.
Jun 26 07:20:17.431497 systemd[1]: Started sshd@23-64.23.160.249:22-147.75.109.163:59946.service - OpenSSH per-connection server daemon (147.75.109.163:59946).
Jun 26 07:20:17.492214 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 59946 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:17.494643 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:17.502292 systemd-logind[1553]: New session 23 of user core.
Jun 26 07:20:17.505357 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 26 07:20:17.653875 sshd[4310]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:17.659805 systemd[1]: sshd@23-64.23.160.249:22-147.75.109.163:59946.service: Deactivated successfully.
Jun 26 07:20:17.664015 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit.
Jun 26 07:20:17.664526 systemd[1]: session-23.scope: Deactivated successfully.
Jun 26 07:20:17.666855 systemd-logind[1553]: Removed session 23.
Jun 26 07:20:22.663356 systemd[1]: Started sshd@24-64.23.160.249:22-147.75.109.163:59960.service - OpenSSH per-connection server daemon (147.75.109.163:59960).
Jun 26 07:20:22.719872 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 59960 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:22.720668 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:22.732996 systemd-logind[1553]: New session 24 of user core.
Jun 26 07:20:22.737448 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 26 07:20:22.883283 sshd[4326]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:22.888852 systemd[1]: sshd@24-64.23.160.249:22-147.75.109.163:59960.service: Deactivated successfully.
Jun 26 07:20:22.896137 systemd-logind[1553]: Session 24 logged out. Waiting for processes to exit.
Jun 26 07:20:22.896288 systemd[1]: session-24.scope: Deactivated successfully.
Jun 26 07:20:22.898525 systemd-logind[1553]: Removed session 24.
Jun 26 07:20:27.894366 systemd[1]: Started sshd@25-64.23.160.249:22-147.75.109.163:59978.service - OpenSSH per-connection server daemon (147.75.109.163:59978).
Jun 26 07:20:27.944649 sshd[4340]: Accepted publickey for core from 147.75.109.163 port 59978 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:27.947303 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:27.953001 systemd-logind[1553]: New session 25 of user core.
Jun 26 07:20:27.960413 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 26 07:20:28.072740 kubelet[2689]: E0626 07:20:28.072691 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:28.125274 sshd[4340]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:28.150435 systemd[1]: Started sshd@26-64.23.160.249:22-147.75.109.163:59990.service - OpenSSH per-connection server daemon (147.75.109.163:59990).
Jun 26 07:20:28.152441 systemd[1]: sshd@25-64.23.160.249:22-147.75.109.163:59978.service: Deactivated successfully.
Jun 26 07:20:28.156378 systemd[1]: session-25.scope: Deactivated successfully.
Jun 26 07:20:28.160709 systemd-logind[1553]: Session 25 logged out. Waiting for processes to exit.
Jun 26 07:20:28.164145 systemd-logind[1553]: Removed session 25.
Jun 26 07:20:28.217941 sshd[4351]: Accepted publickey for core from 147.75.109.163 port 59990 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:28.220455 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:28.227227 systemd-logind[1553]: New session 26 of user core.
Jun 26 07:20:28.240938 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 26 07:20:29.763912 containerd[1585]: time="2024-06-26T07:20:29.763546420Z" level=info msg="StopContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" with timeout 30 (s)"
Jun 26 07:20:29.775015 containerd[1585]: time="2024-06-26T07:20:29.774470493Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 26 07:20:29.778858 containerd[1585]: time="2024-06-26T07:20:29.778813230Z" level=info msg="Stop container \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" with signal terminated"
Jun 26 07:20:29.779688 containerd[1585]: time="2024-06-26T07:20:29.779653985Z" level=info msg="StopContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" with timeout 2 (s)"
Jun 26 07:20:29.780400 containerd[1585]: time="2024-06-26T07:20:29.780373129Z" level=info msg="Stop container \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" with signal terminated"
Jun 26 07:20:29.807026 systemd-networkd[1228]: lxc_health: Link DOWN
Jun 26 07:20:29.807038 systemd-networkd[1228]: lxc_health: Lost carrier
Jun 26 07:20:29.863603 containerd[1585]: time="2024-06-26T07:20:29.862953137Z" level=info msg="shim disconnected" id=0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59 namespace=k8s.io
Jun 26 07:20:29.863603 containerd[1585]: time="2024-06-26T07:20:29.863240405Z" level=warning msg="cleaning up after shim disconnected" id=0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59 namespace=k8s.io
Jun 26 07:20:29.863603 containerd[1585]: time="2024-06-26T07:20:29.863251374Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:29.864916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59-rootfs.mount: Deactivated successfully.
Jun 26 07:20:29.885721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4-rootfs.mount: Deactivated successfully.
Jun 26 07:20:29.889290 containerd[1585]: time="2024-06-26T07:20:29.888839028Z" level=info msg="shim disconnected" id=76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4 namespace=k8s.io
Jun 26 07:20:29.889290 containerd[1585]: time="2024-06-26T07:20:29.888922133Z" level=warning msg="cleaning up after shim disconnected" id=76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4 namespace=k8s.io
Jun 26 07:20:29.889290 containerd[1585]: time="2024-06-26T07:20:29.888936037Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:29.916225 containerd[1585]: time="2024-06-26T07:20:29.916028915Z" level=info msg="StopContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" returns successfully"
Jun 26 07:20:29.917671 containerd[1585]: time="2024-06-26T07:20:29.917054525Z" level=info msg="StopPodSandbox for \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\""
Jun 26 07:20:29.917671 containerd[1585]: time="2024-06-26T07:20:29.917308636Z" level=info msg="StopContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" returns successfully"
Jun 26 07:20:29.917911 containerd[1585]: time="2024-06-26T07:20:29.917770526Z" level=info msg="StopPodSandbox for \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\""
Jun 26 07:20:29.921094 containerd[1585]: time="2024-06-26T07:20:29.917824322Z" level=info msg="Container to stop \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.921094 containerd[1585]: time="2024-06-26T07:20:29.921094585Z" level=info msg="Container to stop \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.921270 containerd[1585]: time="2024-06-26T07:20:29.921113009Z" level=info msg="Container to stop \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.921270 containerd[1585]: time="2024-06-26T07:20:29.921124641Z" level=info msg="Container to stop \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.921270 containerd[1585]: time="2024-06-26T07:20:29.921135169Z" level=info msg="Container to stop \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.922908 containerd[1585]: time="2024-06-26T07:20:29.917114087Z" level=info msg="Container to stop \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 26 07:20:29.924364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6-shm.mount: Deactivated successfully.
Jun 26 07:20:29.929797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6-shm.mount: Deactivated successfully.
Jun 26 07:20:29.984843 containerd[1585]: time="2024-06-26T07:20:29.984668078Z" level=info msg="shim disconnected" id=6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6 namespace=k8s.io
Jun 26 07:20:29.985198 containerd[1585]: time="2024-06-26T07:20:29.984928971Z" level=warning msg="cleaning up after shim disconnected" id=6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6 namespace=k8s.io
Jun 26 07:20:29.985198 containerd[1585]: time="2024-06-26T07:20:29.984943109Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:30.006515 containerd[1585]: time="2024-06-26T07:20:30.006407994Z" level=info msg="shim disconnected" id=49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6 namespace=k8s.io
Jun 26 07:20:30.007206 containerd[1585]: time="2024-06-26T07:20:30.006759030Z" level=warning msg="cleaning up after shim disconnected" id=49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6 namespace=k8s.io
Jun 26 07:20:30.007206 containerd[1585]: time="2024-06-26T07:20:30.006785275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:30.021203 containerd[1585]: time="2024-06-26T07:20:30.020769643Z" level=info msg="TearDown network for sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" successfully"
Jun 26 07:20:30.021203 containerd[1585]: time="2024-06-26T07:20:30.020841590Z" level=info msg="StopPodSandbox for \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" returns successfully"
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049274 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hostproc\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049343 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-kernel\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049382 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-config-path\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049411 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-net\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049442 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-clustermesh-secrets\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.049834 kubelet[2689]: I0626 07:20:30.049471 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g589c\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-kube-api-access-g589c\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049498 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-etc-cni-netd\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049524 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cni-path\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049554 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-xtables-lock\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049578 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-lib-modules\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049604 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-bpf-maps\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054213 kubelet[2689]: I0626 07:20:30.049636 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-cgroup\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054491 containerd[1585]: time="2024-06-26T07:20:30.049930906Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:20:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 26 07:20:30.054556 kubelet[2689]: I0626 07:20:30.049670 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hubble-tls\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054556 kubelet[2689]: I0626 07:20:30.049697 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-run\") pod \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\" (UID: \"60c1ade2-aff0-4986-8665-b46fb5cc8ed1\") "
Jun 26 07:20:30.054556 kubelet[2689]: I0626 07:20:30.051161 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hostproc" (OuterVolumeSpecName: "hostproc") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.054556 kubelet[2689]: I0626 07:20:30.051251 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.054556 kubelet[2689]: I0626 07:20:30.052507 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cni-path" (OuterVolumeSpecName: "cni-path") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.054807 kubelet[2689]: I0626 07:20:30.052961 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.057331 kubelet[2689]: I0626 07:20:30.056096 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.057331 kubelet[2689]: I0626 07:20:30.056226 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.057331 kubelet[2689]: I0626 07:20:30.056266 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.057331 kubelet[2689]: I0626 07:20:30.056288 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.057331 kubelet[2689]: I0626 07:20:30.056311 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.058512 containerd[1585]: time="2024-06-26T07:20:30.058248824Z" level=info msg="TearDown network for sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" successfully"
Jun 26 07:20:30.058512 containerd[1585]: time="2024-06-26T07:20:30.058408299Z" level=info msg="StopPodSandbox for \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" returns successfully"
Jun 26 07:20:30.060273 kubelet[2689]: I0626 07:20:30.060221 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 26 07:20:30.077336 kubelet[2689]: I0626 07:20:30.076795 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 26 07:20:30.078320 kubelet[2689]: I0626 07:20:30.078088 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 26 07:20:30.081059 kubelet[2689]: I0626 07:20:30.080897 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-kube-api-access-g589c" (OuterVolumeSpecName: "kube-api-access-g589c") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "kube-api-access-g589c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 26 07:20:30.081059 kubelet[2689]: I0626 07:20:30.081013 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "60c1ade2-aff0-4986-8665-b46fb5cc8ed1" (UID: "60c1ade2-aff0-4986-8665-b46fb5cc8ed1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150243 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvzt\" (UniqueName: \"kubernetes.io/projected/b6f61c72-2103-49ca-a367-934173367795-kube-api-access-8fvzt\") pod \"b6f61c72-2103-49ca-a367-934173367795\" (UID: \"b6f61c72-2103-49ca-a367-934173367795\") "
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150305 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f61c72-2103-49ca-a367-934173367795-cilium-config-path\") pod \"b6f61c72-2103-49ca-a367-934173367795\" (UID: \"b6f61c72-2103-49ca-a367-934173367795\") "
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150352 2689 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hubble-tls\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150368 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-run\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150378 2689 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-hostproc\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150391 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-config-path\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.151948 kubelet[2689]: I0626 07:20:30.150402 2689 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-net\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150415 2689 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-host-proc-sys-kernel\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150429 2689 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-clustermesh-secrets\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150444 2689 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g589c\" (UniqueName: \"kubernetes.io/projected/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-kube-api-access-g589c\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150458 2689 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cni-path\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150471 2689 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-xtables-lock\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150484 2689 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-lib-modules\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150504 2689 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-etc-cni-netd\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152396 kubelet[2689]: I0626 07:20:30.150517 2689 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-bpf-maps\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.152735 kubelet[2689]: I0626 07:20:30.150530 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60c1ade2-aff0-4986-8665-b46fb5cc8ed1-cilium-cgroup\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.154479 kubelet[2689]: I0626 07:20:30.154425 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6f61c72-2103-49ca-a367-934173367795-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6f61c72-2103-49ca-a367-934173367795" (UID: "b6f61c72-2103-49ca-a367-934173367795"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 26 07:20:30.155520 kubelet[2689]: I0626 07:20:30.155475 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f61c72-2103-49ca-a367-934173367795-kube-api-access-8fvzt" (OuterVolumeSpecName: "kube-api-access-8fvzt") pod "b6f61c72-2103-49ca-a367-934173367795" (UID: "b6f61c72-2103-49ca-a367-934173367795"). InnerVolumeSpecName "kube-api-access-8fvzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 26 07:20:30.251059 kubelet[2689]: I0626 07:20:30.250950 2689 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8fvzt\" (UniqueName: \"kubernetes.io/projected/b6f61c72-2103-49ca-a367-934173367795-kube-api-access-8fvzt\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.251059 kubelet[2689]: I0626 07:20:30.251039 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f61c72-2103-49ca-a367-934173367795-cilium-config-path\") on node \"ci-4012.0.0-9-ba53898dab\" DevicePath \"\""
Jun 26 07:20:30.295304 kubelet[2689]: E0626 07:20:30.295138 2689 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 26 07:20:30.612047 kubelet[2689]: I0626 07:20:30.611527 2689 scope.go:117] "RemoveContainer" containerID="0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59"
Jun 26 07:20:30.617050 containerd[1585]: time="2024-06-26T07:20:30.617005238Z" level=info msg="RemoveContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\""
Jun 26 07:20:30.633387 containerd[1585]: time="2024-06-26T07:20:30.633242301Z" level=info msg="RemoveContainer for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" returns successfully"
Jun 26 07:20:30.651331 kubelet[2689]: I0626 07:20:30.651287 2689 scope.go:117] "RemoveContainer" containerID="0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59"
Jun 26 07:20:30.653387 containerd[1585]: time="2024-06-26T07:20:30.652073388Z" level=error msg="ContainerStatus for \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\": not found"
Jun 26 07:20:30.653600 kubelet[2689]: E0626 07:20:30.652400 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\": not found" containerID="0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59"
Jun 26 07:20:30.659200 kubelet[2689]: I0626 07:20:30.659121 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59"} err="failed to get container status \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\": rpc error: code = NotFound desc = an error occurred when try to find container \"0eadf353a1a031453478e4573d77073f61970d2116a7fcd51fb880ee557eee59\": not found"
Jun 26 07:20:30.659200 kubelet[2689]: I0626 07:20:30.659202 2689 scope.go:117] "RemoveContainer" containerID="76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4"
Jun 26 07:20:30.664373 containerd[1585]: time="2024-06-26T07:20:30.664166209Z" level=info msg="RemoveContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\""
Jun 26 07:20:30.669720 containerd[1585]: time="2024-06-26T07:20:30.669593120Z" level=info msg="RemoveContainer for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" returns successfully"
Jun 26 07:20:30.670112 kubelet[2689]: I0626 07:20:30.670072 2689 scope.go:117] "RemoveContainer" containerID="ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63"
Jun 26 07:20:30.673038 containerd[1585]: time="2024-06-26T07:20:30.672938203Z" level=info msg="RemoveContainer for \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\""
Jun 26 07:20:30.679459 containerd[1585]: time="2024-06-26T07:20:30.678972630Z" level=info msg="RemoveContainer for \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\" returns successfully"
Jun 26 07:20:30.681215 kubelet[2689]: I0626 07:20:30.681160 2689 scope.go:117] "RemoveContainer" containerID="bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e"
Jun 26 07:20:30.683662 containerd[1585]: time="2024-06-26T07:20:30.683594469Z" level=info msg="RemoveContainer for \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\""
Jun 26 07:20:30.686565 containerd[1585]: time="2024-06-26T07:20:30.686399298Z" level=info msg="RemoveContainer for \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\" returns successfully"
Jun 26 07:20:30.687044 kubelet[2689]: I0626 07:20:30.686960 2689 scope.go:117] "RemoveContainer" containerID="f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2"
Jun 26 07:20:30.688663 containerd[1585]: time="2024-06-26T07:20:30.688632584Z" level=info msg="RemoveContainer for \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\""
Jun 26 07:20:30.691116 containerd[1585]: time="2024-06-26T07:20:30.691075630Z" level=info msg="RemoveContainer for \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\" returns successfully"
Jun 26 07:20:30.691424 kubelet[2689]: I0626 07:20:30.691400 2689 scope.go:117] "RemoveContainer" containerID="ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034"
Jun 26 07:20:30.692481 containerd[1585]: time="2024-06-26T07:20:30.692417983Z" level=info msg="RemoveContainer for \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\""
Jun 26 07:20:30.694949 containerd[1585]: time="2024-06-26T07:20:30.694849426Z" level=info msg="RemoveContainer for \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\" returns successfully"
Jun 26 07:20:30.695341 kubelet[2689]: I0626 07:20:30.695307 2689 scope.go:117] "RemoveContainer" containerID="76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4"
Jun 26 07:20:30.695767 containerd[1585]: time="2024-06-26T07:20:30.695633564Z" level=error msg="ContainerStatus for \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\": not found"
Jun 26 07:20:30.695864 kubelet[2689]: E0626 07:20:30.695839 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\": not found" containerID="76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4"
Jun 26 07:20:30.695910 kubelet[2689]: I0626 07:20:30.695886 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4"} err="failed to get container status \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"76da7073a9f4bb279e9111385400166dbdb79998ac319e705dd6408f615c2cc4\": not found"
Jun 26 07:20:30.695910 kubelet[2689]: I0626 07:20:30.695901 2689 scope.go:117] "RemoveContainer" containerID="ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63"
Jun 26 07:20:30.696146 containerd[1585]: time="2024-06-26T07:20:30.696111793Z" level=error msg="ContainerStatus for \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\": not found"
Jun 26 07:20:30.696616 kubelet[2689]: E0626 07:20:30.696302 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\": not found"
containerID="ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63" Jun 26 07:20:30.696616 kubelet[2689]: I0626 07:20:30.696338 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63"} err="failed to get container status \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba84f4f5c10f76e4bd7c67a088e63f82a2835c288b63fb4ae436ed5cba7bce63\": not found" Jun 26 07:20:30.696616 kubelet[2689]: I0626 07:20:30.696350 2689 scope.go:117] "RemoveContainer" containerID="bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e" Jun 26 07:20:30.696828 containerd[1585]: time="2024-06-26T07:20:30.696781662Z" level=error msg="ContainerStatus for \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\": not found" Jun 26 07:20:30.697129 kubelet[2689]: E0626 07:20:30.697104 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\": not found" containerID="bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e" Jun 26 07:20:30.697189 kubelet[2689]: I0626 07:20:30.697181 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e"} err="failed to get container status \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf368c99e3a350f9fb4b877796df5fc8d86578130af84f4e4ca7940a1ba2291e\": not found" Jun 26 
07:20:30.697220 kubelet[2689]: I0626 07:20:30.697199 2689 scope.go:117] "RemoveContainer" containerID="f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2" Jun 26 07:20:30.697477 containerd[1585]: time="2024-06-26T07:20:30.697435909Z" level=error msg="ContainerStatus for \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\": not found" Jun 26 07:20:30.697613 kubelet[2689]: E0626 07:20:30.697594 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\": not found" containerID="f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2" Jun 26 07:20:30.697651 kubelet[2689]: I0626 07:20:30.697635 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2"} err="failed to get container status \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f12b121021ba16baebb9e3ef893aef32bf21b2e4af356620cb7a1a99b6e941a2\": not found" Jun 26 07:20:30.697651 kubelet[2689]: I0626 07:20:30.697650 2689 scope.go:117] "RemoveContainer" containerID="ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034" Jun 26 07:20:30.697859 containerd[1585]: time="2024-06-26T07:20:30.697830691Z" level=error msg="ContainerStatus for \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\": not found" Jun 26 07:20:30.698018 kubelet[2689]: E0626 
07:20:30.697972 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\": not found" containerID="ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034" Jun 26 07:20:30.698058 kubelet[2689]: I0626 07:20:30.698030 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034"} err="failed to get container status \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\": rpc error: code = NotFound desc = an error occurred when try to find container \"ede82fdb79e7c293e1e9a11c183387dee4e75d48f155f6b704059590d2039034\": not found" Jun 26 07:20:30.728527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6-rootfs.mount: Deactivated successfully. Jun 26 07:20:30.728699 systemd[1]: var-lib-kubelet-pods-b6f61c72\x2d2103\x2d49ca\x2da367\x2d934173367795-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fvzt.mount: Deactivated successfully. Jun 26 07:20:30.728813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6-rootfs.mount: Deactivated successfully. Jun 26 07:20:30.728902 systemd[1]: var-lib-kubelet-pods-60c1ade2\x2daff0\x2d4986\x2d8665\x2db46fb5cc8ed1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg589c.mount: Deactivated successfully. Jun 26 07:20:30.729019 systemd[1]: var-lib-kubelet-pods-60c1ade2\x2daff0\x2d4986\x2d8665\x2db46fb5cc8ed1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jun 26 07:20:30.729116 systemd[1]: var-lib-kubelet-pods-60c1ade2\x2daff0\x2d4986\x2d8665\x2db46fb5cc8ed1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 26 07:20:31.075406 kubelet[2689]: I0626 07:20:31.075349 2689 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" path="/var/lib/kubelet/pods/60c1ade2-aff0-4986-8665-b46fb5cc8ed1/volumes"
Jun 26 07:20:31.076187 kubelet[2689]: I0626 07:20:31.076150 2689 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b6f61c72-2103-49ca-a367-934173367795" path="/var/lib/kubelet/pods/b6f61c72-2103-49ca-a367-934173367795/volumes"
Jun 26 07:20:31.647539 sshd[4351]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:31.664863 systemd[1]: Started sshd@27-64.23.160.249:22-147.75.109.163:60002.service - OpenSSH per-connection server daemon (147.75.109.163:60002).
Jun 26 07:20:31.667743 systemd[1]: sshd@26-64.23.160.249:22-147.75.109.163:59990.service: Deactivated successfully.
Jun 26 07:20:31.688641 systemd[1]: session-26.scope: Deactivated successfully.
Jun 26 07:20:31.691160 systemd-logind[1553]: Session 26 logged out. Waiting for processes to exit.
Jun 26 07:20:31.698163 systemd-logind[1553]: Removed session 26.
Jun 26 07:20:31.732424 sshd[4522]: Accepted publickey for core from 147.75.109.163 port 60002 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:31.735733 sshd[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:31.745445 systemd-logind[1553]: New session 27 of user core.
Jun 26 07:20:31.754990 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 26 07:20:32.922709 sshd[4522]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:32.943201 systemd[1]: Started sshd@28-64.23.160.249:22-147.75.109.163:60010.service - OpenSSH per-connection server daemon (147.75.109.163:60010).
Jun 26 07:20:32.946921 systemd[1]: sshd@27-64.23.160.249:22-147.75.109.163:60002.service: Deactivated successfully.
Jun 26 07:20:32.957188 systemd[1]: session-27.scope: Deactivated successfully.
Jun 26 07:20:32.967148 systemd-logind[1553]: Session 27 logged out. Waiting for processes to exit.
Jun 26 07:20:32.976210 kubelet[2689]: I0626 07:20:32.975920 2689 topology_manager.go:215] "Topology Admit Handler" podUID="a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d" podNamespace="kube-system" podName="cilium-wlrtg"
Jun 26 07:20:32.980499 kubelet[2689]: E0626 07:20:32.980436 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="mount-bpf-fs"
Jun 26 07:20:32.981000 kubelet[2689]: E0626 07:20:32.980525 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="clean-cilium-state"
Jun 26 07:20:32.981000 kubelet[2689]: E0626 07:20:32.980543 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="cilium-agent"
Jun 26 07:20:32.981000 kubelet[2689]: E0626 07:20:32.980559 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="mount-cgroup"
Jun 26 07:20:32.981000 kubelet[2689]: E0626 07:20:32.980575 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="apply-sysctl-overwrites"
Jun 26 07:20:32.981000 kubelet[2689]: E0626 07:20:32.980586 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f61c72-2103-49ca-a367-934173367795" containerName="cilium-operator"
Jun 26 07:20:32.981000 kubelet[2689]: I0626 07:20:32.980653 2689 memory_manager.go:346] "RemoveStaleState removing state" podUID="60c1ade2-aff0-4986-8665-b46fb5cc8ed1" containerName="cilium-agent"
Jun 26 07:20:32.981000 kubelet[2689]: I0626 07:20:32.980667 2689 memory_manager.go:346] "RemoveStaleState removing state" podUID="b6f61c72-2103-49ca-a367-934173367795" containerName="cilium-operator"
Jun 26 07:20:32.985362 systemd-logind[1553]: Removed session 27.
Jun 26 07:20:33.035819 sshd[4536]: Accepted publickey for core from 147.75.109.163 port 60010 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:33.046120 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:33.080297 systemd-logind[1553]: New session 28 of user core.
Jun 26 07:20:33.083454 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090217 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-etc-cni-netd\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090287 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-clustermesh-secrets\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090322 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-host-proc-sys-kernel\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090356 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-cilium-cgroup\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090384 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-hubble-tls\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093116 kubelet[2689]: I0626 07:20:33.090411 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-lib-modules\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090442 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-cni-path\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090468 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-xtables-lock\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090496 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-cilium-run\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090525 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-hostproc\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090556 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-cilium-config-path\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093520 kubelet[2689]: I0626 07:20:33.090589 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-cilium-ipsec-secrets\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093776 kubelet[2689]: I0626 07:20:33.090632 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-host-proc-sys-net\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093776 kubelet[2689]: I0626 07:20:33.090662 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qrw\" (UniqueName: \"kubernetes.io/projected/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-kube-api-access-x9qrw\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.093776 kubelet[2689]: I0626 07:20:33.090696 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d-bpf-maps\") pod \"cilium-wlrtg\" (UID: \"a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d\") " pod="kube-system/cilium-wlrtg"
Jun 26 07:20:33.174387 sshd[4536]: pam_unix(sshd:session): session closed for user core
Jun 26 07:20:33.189411 systemd[1]: Started sshd@29-64.23.160.249:22-147.75.109.163:60016.service - OpenSSH per-connection server daemon (147.75.109.163:60016).
Jun 26 07:20:33.190175 systemd[1]: sshd@28-64.23.160.249:22-147.75.109.163:60010.service: Deactivated successfully.
Jun 26 07:20:33.214246 systemd[1]: session-28.scope: Deactivated successfully.
Jun 26 07:20:33.215203 systemd-logind[1553]: Session 28 logged out. Waiting for processes to exit.
Jun 26 07:20:33.219152 systemd-logind[1553]: Removed session 28.
Jun 26 07:20:33.295843 sshd[4546]: Accepted publickey for core from 147.75.109.163 port 60016 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:20:33.298542 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:20:33.308440 kubelet[2689]: E0626 07:20:33.306434 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:33.309401 systemd-logind[1553]: New session 29 of user core.
Jun 26 07:20:33.316728 systemd[1]: Started session-29.scope - Session 29 of User core.
Jun 26 07:20:33.328058 containerd[1585]: time="2024-06-26T07:20:33.326753005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlrtg,Uid:a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d,Namespace:kube-system,Attempt:0,}"
Jun 26 07:20:33.372865 containerd[1585]: time="2024-06-26T07:20:33.372645281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:20:33.372865 containerd[1585]: time="2024-06-26T07:20:33.372753208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:20:33.372865 containerd[1585]: time="2024-06-26T07:20:33.372783786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:20:33.372865 containerd[1585]: time="2024-06-26T07:20:33.372803821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:20:33.467535 containerd[1585]: time="2024-06-26T07:20:33.467470488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlrtg,Uid:a4bc6fa6-e6e7-4fe2-8abc-967d0ed33e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\""
Jun 26 07:20:33.471935 kubelet[2689]: E0626 07:20:33.470074 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:33.484153 containerd[1585]: time="2024-06-26T07:20:33.483724600Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 26 07:20:33.498523 containerd[1585]: time="2024-06-26T07:20:33.498457564Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ecc1b2c64295044ddf684e89a87193dcb62852ea8e7ea2b86cc481c9d5f5d420\""
Jun 26 07:20:33.500649 containerd[1585]: time="2024-06-26T07:20:33.499653524Z" level=info msg="StartContainer for \"ecc1b2c64295044ddf684e89a87193dcb62852ea8e7ea2b86cc481c9d5f5d420\""
Jun 26 07:20:33.629546 containerd[1585]: time="2024-06-26T07:20:33.629487018Z" level=info msg="StartContainer for \"ecc1b2c64295044ddf684e89a87193dcb62852ea8e7ea2b86cc481c9d5f5d420\" returns successfully"
Jun 26 07:20:33.652965 kubelet[2689]: E0626 07:20:33.652145 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:33.729020 containerd[1585]: time="2024-06-26T07:20:33.728048762Z" level=info msg="shim disconnected" id=ecc1b2c64295044ddf684e89a87193dcb62852ea8e7ea2b86cc481c9d5f5d420 namespace=k8s.io
Jun 26 07:20:33.740137 containerd[1585]: time="2024-06-26T07:20:33.728149621Z" level=warning msg="cleaning up after shim disconnected" id=ecc1b2c64295044ddf684e89a87193dcb62852ea8e7ea2b86cc481c9d5f5d420 namespace=k8s.io
Jun 26 07:20:33.740391 containerd[1585]: time="2024-06-26T07:20:33.740356623Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:34.073494 kubelet[2689]: E0626 07:20:34.072871 2689 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-lrhrr" podUID="ece5d8ef-6449-4f00-8aa2-601ebdd24f8e"
Jun 26 07:20:34.666280 kubelet[2689]: E0626 07:20:34.666157 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:34.674000 containerd[1585]: time="2024-06-26T07:20:34.673902309Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 26 07:20:34.700873 containerd[1585]: time="2024-06-26T07:20:34.699550071Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f\""
Jun 26 07:20:34.708096 containerd[1585]: time="2024-06-26T07:20:34.703697223Z" level=info msg="StartContainer for \"8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f\""
Jun 26 07:20:34.804600 containerd[1585]: time="2024-06-26T07:20:34.804536231Z" level=info msg="StartContainer for \"8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f\" returns successfully"
Jun 26 07:20:34.855297 containerd[1585]: time="2024-06-26T07:20:34.855195074Z" level=info msg="shim disconnected" id=8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f namespace=k8s.io
Jun 26 07:20:34.855669 containerd[1585]: time="2024-06-26T07:20:34.855637317Z" level=warning msg="cleaning up after shim disconnected" id=8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f namespace=k8s.io
Jun 26 07:20:34.855803 containerd[1585]: time="2024-06-26T07:20:34.855785567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:35.033169 containerd[1585]: time="2024-06-26T07:20:35.033111683Z" level=info msg="StopPodSandbox for \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\""
Jun 26 07:20:35.033378 containerd[1585]: time="2024-06-26T07:20:35.033302648Z" level=info msg="TearDown network for sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" successfully"
Jun 26 07:20:35.033378 containerd[1585]: time="2024-06-26T07:20:35.033348028Z" level=info msg="StopPodSandbox for \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" returns successfully"
Jun 26 07:20:35.034952 containerd[1585]: time="2024-06-26T07:20:35.034143800Z" level=info msg="RemovePodSandbox for \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\""
Jun 26 07:20:35.038039 containerd[1585]: time="2024-06-26T07:20:35.037949494Z" level=info msg="Forcibly stopping sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\""
Jun 26 07:20:35.048612 containerd[1585]: time="2024-06-26T07:20:35.038181455Z" level=info msg="TearDown network for sandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" successfully"
Jun 26 07:20:35.060055 containerd[1585]: time="2024-06-26T07:20:35.057625655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 26 07:20:35.060055 containerd[1585]: time="2024-06-26T07:20:35.057777797Z" level=info msg="RemovePodSandbox \"6aa1e1f4285987cfd8bd53c80946fed0c6c95f537f70ab6e4b06eeb6d12fe4b6\" returns successfully"
Jun 26 07:20:35.061482 containerd[1585]: time="2024-06-26T07:20:35.061012190Z" level=info msg="StopPodSandbox for \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\""
Jun 26 07:20:35.061482 containerd[1585]: time="2024-06-26T07:20:35.061176847Z" level=info msg="TearDown network for sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" successfully"
Jun 26 07:20:35.061482 containerd[1585]: time="2024-06-26T07:20:35.061196166Z" level=info msg="StopPodSandbox for \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" returns successfully"
Jun 26 07:20:35.063031 containerd[1585]: time="2024-06-26T07:20:35.061915328Z" level=info msg="RemovePodSandbox for \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\""
Jun 26 07:20:35.063031 containerd[1585]: time="2024-06-26T07:20:35.061963748Z" level=info msg="Forcibly stopping sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\""
Jun 26 07:20:35.063031 containerd[1585]: time="2024-06-26T07:20:35.062059331Z" level=info msg="TearDown network for sandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" successfully"
Jun 26 07:20:35.067306 containerd[1585]: time="2024-06-26T07:20:35.067239380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 26 07:20:35.067620 containerd[1585]: time="2024-06-26T07:20:35.067551820Z" level=info msg="RemovePodSandbox \"49b8d196386407f22823aea83c77a06302d7f250e00161430f646378551c15a6\" returns successfully"
Jun 26 07:20:35.228209 systemd[1]: run-containerd-runc-k8s.io-8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f-runc.U5hJN4.mount: Deactivated successfully.
Jun 26 07:20:35.228409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b232eb240423cb583f909953ab67aa371d0c59329455319fd4e31e628dd551f-rootfs.mount: Deactivated successfully.
Jun 26 07:20:35.298134 kubelet[2689]: E0626 07:20:35.297958 2689 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 26 07:20:35.674324 kubelet[2689]: E0626 07:20:35.673574 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 26 07:20:35.681215 containerd[1585]: time="2024-06-26T07:20:35.681169410Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 26 07:20:35.722843 containerd[1585]: time="2024-06-26T07:20:35.720354190Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e\""
Jun 26 07:20:35.727249 containerd[1585]: time="2024-06-26T07:20:35.726763263Z" level=info msg="StartContainer for \"1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e\""
Jun 26 07:20:35.839242 containerd[1585]: time="2024-06-26T07:20:35.838532018Z" level=info msg="StartContainer for \"1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e\" returns successfully"
Jun 26 07:20:35.879164 containerd[1585]: time="2024-06-26T07:20:35.878620320Z" level=info msg="shim disconnected" id=1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e namespace=k8s.io
Jun 26 07:20:35.879164 containerd[1585]: time="2024-06-26T07:20:35.878922736Z" level=warning msg="cleaning up after shim disconnected" id=1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e namespace=k8s.io
Jun 26 07:20:35.879164 containerd[1585]: time="2024-06-26T07:20:35.878934809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 26 07:20:35.900478 containerd[1585]: time="2024-06-26T07:20:35.900381302Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:20:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 26 07:20:36.073255 kubelet[2689]: E0626 07:20:36.072722 2689 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-lrhrr" podUID="ece5d8ef-6449-4f00-8aa2-601ebdd24f8e"
Jun 26 07:20:36.227699 systemd[1]: run-containerd-runc-k8s.io-1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e-runc.sIx7nd.mount: Deactivated successfully.
Jun 26 07:20:36.227906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3c6dc066ef74c0c20b643d895d4edde2f19342d3f5f95dda78a477f4a6a68e-rootfs.mount: Deactivated successfully.
Jun 26 07:20:36.679877 kubelet[2689]: E0626 07:20:36.679834 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:36.688211 containerd[1585]: time="2024-06-26T07:20:36.686570832Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 26 07:20:36.710381 containerd[1585]: time="2024-06-26T07:20:36.707884501Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a\"" Jun 26 07:20:36.710381 containerd[1585]: time="2024-06-26T07:20:36.709660490Z" level=info msg="StartContainer for \"1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a\"" Jun 26 07:20:36.805391 containerd[1585]: time="2024-06-26T07:20:36.805323750Z" level=info msg="StartContainer for \"1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a\" returns successfully" Jun 26 07:20:36.831316 containerd[1585]: time="2024-06-26T07:20:36.831219727Z" level=info msg="shim disconnected" id=1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a namespace=k8s.io Jun 26 07:20:36.831316 containerd[1585]: time="2024-06-26T07:20:36.831315818Z" level=warning msg="cleaning up after shim disconnected" id=1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a namespace=k8s.io Jun 26 07:20:36.831889 containerd[1585]: time="2024-06-26T07:20:36.831402573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:20:36.853328 containerd[1585]: time="2024-06-26T07:20:36.853243297Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:20:36Z\" level=warning msg=\"failed to remove runc 
container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 26 07:20:37.227757 systemd[1]: run-containerd-runc-k8s.io-1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a-runc.eScoIf.mount: Deactivated successfully. Jun 26 07:20:37.228049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e5f20f209751d31ad966db1097f6fcd1db9f7564323c014fc0f83721e82655a-rootfs.mount: Deactivated successfully. Jun 26 07:20:37.685399 kubelet[2689]: E0626 07:20:37.684611 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:37.692500 containerd[1585]: time="2024-06-26T07:20:37.692322204Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 26 07:20:37.719263 containerd[1585]: time="2024-06-26T07:20:37.719169712Z" level=info msg="CreateContainer within sandbox \"112102a7af1700def6b93267215deafb784c242c6137b2515cf8c6830111d04a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b16fadf074874704675a72e3aadcb014974c9c154e8e6cbd75f058bfdf95eb7\"" Jun 26 07:20:37.727680 containerd[1585]: time="2024-06-26T07:20:37.727026871Z" level=info msg="StartContainer for \"1b16fadf074874704675a72e3aadcb014974c9c154e8e6cbd75f058bfdf95eb7\"" Jun 26 07:20:37.809589 containerd[1585]: time="2024-06-26T07:20:37.809528939Z" level=info msg="StartContainer for \"1b16fadf074874704675a72e3aadcb014974c9c154e8e6cbd75f058bfdf95eb7\" returns successfully" Jun 26 07:20:38.065108 kubelet[2689]: I0626 07:20:38.063718 2689 setters.go:552] "Node became not ready" node="ci-4012.0.0-9-ba53898dab" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-26T07:20:38Z","lastTransitionTime":"2024-06-26T07:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 26 07:20:38.072842 kubelet[2689]: E0626 07:20:38.072767 2689 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-lrhrr" podUID="ece5d8ef-6449-4f00-8aa2-601ebdd24f8e" Jun 26 07:20:38.486073 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 26 07:20:38.695279 kubelet[2689]: E0626 07:20:38.695067 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:38.719837 kubelet[2689]: I0626 07:20:38.719782 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wlrtg" podStartSLOduration=6.719712536 podCreationTimestamp="2024-06-26 07:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:20:38.718423767 +0000 UTC m=+123.841161051" watchObservedRunningTime="2024-06-26 07:20:38.719712536 +0000 UTC m=+123.842449806" Jun 26 07:20:39.706052 kubelet[2689]: E0626 07:20:39.704705 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:40.073408 kubelet[2689]: E0626 07:20:40.072834 2689 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-lrhrr" podUID="ece5d8ef-6449-4f00-8aa2-601ebdd24f8e" Jun 26 07:20:41.880252 systemd-networkd[1228]: lxc_health: Link UP Jun 26 07:20:41.880587 systemd-networkd[1228]: lxc_health: Gained carrier Jun 26 07:20:42.073266 kubelet[2689]: E0626 07:20:42.072517 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:43.313023 kubelet[2689]: E0626 07:20:43.310825 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:43.714399 kubelet[2689]: E0626 07:20:43.714353 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:43.804351 systemd-networkd[1228]: lxc_health: Gained IPv6LL Jun 26 07:20:44.485593 systemd[1]: run-containerd-runc-k8s.io-1b16fadf074874704675a72e3aadcb014974c9c154e8e6cbd75f058bfdf95eb7-runc.qcJpRY.mount: Deactivated successfully. Jun 26 07:20:44.717014 kubelet[2689]: E0626 07:20:44.716833 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 26 07:20:46.684381 systemd[1]: run-containerd-runc-k8s.io-1b16fadf074874704675a72e3aadcb014974c9c154e8e6cbd75f058bfdf95eb7-runc.L97oeI.mount: Deactivated successfully. Jun 26 07:20:48.918271 sshd[4546]: pam_unix(sshd:session): session closed for user core Jun 26 07:20:48.925230 systemd[1]: sshd@29-64.23.160.249:22-147.75.109.163:60016.service: Deactivated successfully. 
Jun 26 07:20:48.930852 systemd[1]: session-29.scope: Deactivated successfully.
Jun 26 07:20:48.932221 systemd-logind[1553]: Session 29 logged out. Waiting for processes to exit.
Jun 26 07:20:48.933507 systemd-logind[1553]: Removed session 29.