Aug 6 07:42:34.915056 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 6 07:42:34.915087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:42:34.915102 kernel: BIOS-provided physical RAM map: Aug 6 07:42:34.915109 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 6 07:42:34.915115 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 6 07:42:34.915122 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 6 07:42:34.915130 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 6 07:42:34.915136 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 6 07:42:34.915143 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 6 07:42:34.915153 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 6 07:42:34.915160 kernel: NX (Execute Disable) protection: active Aug 6 07:42:34.915167 kernel: APIC: Static calls initialized Aug 6 07:42:34.915174 kernel: SMBIOS 2.8 present. Aug 6 07:42:34.915181 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 6 07:42:34.915190 kernel: Hypervisor detected: KVM Aug 6 07:42:34.915201 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 6 07:42:34.915208 kernel: kvm-clock: using sched offset of 3055545089 cycles Aug 6 07:42:34.915221 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 6 07:42:34.915229 kernel: tsc: Detected 2494.140 MHz processor Aug 6 07:42:34.915239 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 6 07:42:34.915255 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 6 07:42:34.915264 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 6 07:42:34.915272 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 6 07:42:34.915279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 6 07:42:34.915292 kernel: ACPI: Early table checksum verification disabled Aug 6 07:42:34.915299 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 6 07:42:34.915307 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915315 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915323 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915331 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 6 07:42:34.915338 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915346 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915354 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915365 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:42:34.915372 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Aug 6 07:42:34.915380 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 6 07:42:34.915388 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 6 07:42:34.915395 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 6 07:42:34.915403 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 6 07:42:34.915411 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 6 07:42:34.915426 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 6 07:42:34.915434 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 6 07:42:34.915442 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 6 07:42:34.915450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 6 07:42:34.915459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 6 07:42:34.915467 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 6 07:42:34.915475 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 6 07:42:34.915487 kernel: Zone ranges: Aug 6 07:42:34.915495 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 6 07:42:34.915504 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 6 07:42:34.915512 kernel: Normal empty Aug 6 07:42:34.915520 kernel: Movable zone start for each node Aug 6 07:42:34.915528 kernel: Early memory node ranges Aug 6 07:42:34.915536 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 6 07:42:34.915544 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 6 07:42:34.915552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 6 07:42:34.915564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 6 07:42:34.915572 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 6 07:42:34.915580 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 6 07:42:34.915588 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 6 07:42:34.915596 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 6 07:42:34.915604 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 6 07:42:34.915612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 6 07:42:34.915621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 6 07:42:34.915631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 6 07:42:34.915648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 6 07:42:34.915665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 6 07:42:34.915678 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 6 07:42:34.915688 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 6 07:42:34.915696 kernel: TSC deadline timer available Aug 6 07:42:34.915704 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 6 07:42:34.915712 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 6 07:42:34.915720 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 6 07:42:34.915728 kernel: Booting paravirtualized kernel on KVM Aug 6 07:42:34.915739 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 6 07:42:34.915752 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 6 07:42:34.915760 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 6 07:42:34.915769 kernel: 
pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 6 07:42:34.915777 kernel: pcpu-alloc: [0] 0 1 Aug 6 07:42:34.915784 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 6 07:42:34.915794 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:42:34.915802 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 6 07:42:34.915811 kernel: random: crng init done Aug 6 07:42:34.915827 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 6 07:42:34.915840 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 6 07:42:34.915855 kernel: Fallback order for Node 0: 0 Aug 6 07:42:34.915865 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 6 07:42:34.915873 kernel: Policy zone: DMA32 Aug 6 07:42:34.915882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 6 07:42:34.915891 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 131292K reserved, 0K cma-reserved) Aug 6 07:42:34.915899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 6 07:42:34.915911 kernel: Kernel/User page tables isolation: enabled Aug 6 07:42:34.915919 kernel: ftrace: allocating 37659 entries in 148 pages Aug 6 07:42:34.915928 kernel: ftrace: allocated 148 pages with 3 groups Aug 6 07:42:34.915936 kernel: Dynamic Preempt: voluntary Aug 6 07:42:34.915944 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 6 07:42:34.915953 kernel: rcu: RCU event tracing is enabled. Aug 6 07:42:34.915962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 6 07:42:34.916004 kernel: Trampoline variant of Tasks RCU enabled. Aug 6 07:42:34.916012 kernel: Rude variant of Tasks RCU enabled. Aug 6 07:42:34.916020 kernel: Tracing variant of Tasks RCU enabled. Aug 6 07:42:34.916032 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 6 07:42:34.916041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 6 07:42:34.916049 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 6 07:42:34.916057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 6 07:42:34.916065 kernel: Console: colour VGA+ 80x25 Aug 6 07:42:34.916073 kernel: printk: console [tty0] enabled Aug 6 07:42:34.916081 kernel: printk: console [ttyS0] enabled Aug 6 07:42:34.916090 kernel: ACPI: Core revision 20230628 Aug 6 07:42:34.916098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 6 07:42:34.916111 kernel: APIC: Switch to symmetric I/O mode setup Aug 6 07:42:34.916119 kernel: x2apic enabled Aug 6 07:42:34.916127 kernel: APIC: Switched APIC routing to: physical x2apic Aug 6 07:42:34.916138 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 6 07:42:34.916147 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 6 07:42:34.916155 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Aug 6 07:42:34.916163 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 6 07:42:34.916172 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 6 07:42:34.916195 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 6 07:42:34.916210 kernel: Spectre V2 : Mitigation: Retpolines Aug 6 07:42:34.916223 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 6 07:42:34.916241 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 6 07:42:34.916255 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 6 07:42:34.916270 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 6 07:42:34.916284 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 6 07:42:34.916296 kernel: MDS: Mitigation: Clear CPU buffers Aug 6 07:42:34.916310 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 6 07:42:34.916333 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 6 07:42:34.916345 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 6 07:42:34.916359 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 6 07:42:34.916372 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 6 07:42:34.916385 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 6 07:42:34.916399 kernel: Freeing SMP alternatives memory: 32K Aug 6 07:42:34.916411 kernel: pid_max: default: 32768 minimum: 301 Aug 6 07:42:34.916426 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 6 07:42:34.916445 kernel: SELinux: Initializing. Aug 6 07:42:34.916461 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 07:42:34.916475 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 07:42:34.916491 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 6 07:42:34.916504 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:42:34.916517 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:42:34.916531 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:42:34.916545 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 6 07:42:34.916559 kernel: signal: max sigframe size: 1776 Aug 6 07:42:34.916578 kernel: rcu: Hierarchical SRCU implementation. Aug 6 07:42:34.916588 kernel: rcu: Max phase no-delay instances is 400. Aug 6 07:42:34.916597 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 6 07:42:34.916606 kernel: smp: Bringing up secondary CPUs ... Aug 6 07:42:34.916615 kernel: smpboot: x86: Booting SMP configuration: Aug 6 07:42:34.916624 kernel: .... 
node #0, CPUs: #1 Aug 6 07:42:34.916633 kernel: smp: Brought up 1 node, 2 CPUs Aug 6 07:42:34.916641 kernel: smpboot: Max logical packages: 1 Aug 6 07:42:34.916650 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 6 07:42:34.916663 kernel: devtmpfs: initialized Aug 6 07:42:34.916672 kernel: x86/mm: Memory block size: 128MB Aug 6 07:42:34.916682 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 6 07:42:34.916691 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 6 07:42:34.916700 kernel: pinctrl core: initialized pinctrl subsystem Aug 6 07:42:34.916708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 6 07:42:34.916717 kernel: audit: initializing netlink subsys (disabled) Aug 6 07:42:34.916731 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 6 07:42:34.916740 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 6 07:42:34.916753 kernel: audit: type=2000 audit(1722930154.369:1): state=initialized audit_enabled=0 res=1 Aug 6 07:42:34.916762 kernel: cpuidle: using governor menu Aug 6 07:42:34.916771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 6 07:42:34.916780 kernel: dca service started, version 1.12.1 Aug 6 07:42:34.916789 kernel: PCI: Using configuration type 1 for base access Aug 6 07:42:34.916798 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 6 07:42:34.916807 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 6 07:42:34.916816 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 6 07:42:34.916827 kernel: ACPI: Added _OSI(Module Device) Aug 6 07:42:34.916845 kernel: ACPI: Added _OSI(Processor Device) Aug 6 07:42:34.916859 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 6 07:42:34.916871 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 6 07:42:34.916883 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 6 07:42:34.916895 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 6 07:42:34.916909 kernel: ACPI: Interpreter enabled Aug 6 07:42:34.916921 kernel: ACPI: PM: (supports S0 S5) Aug 6 07:42:34.916935 kernel: ACPI: Using IOAPIC for interrupt routing Aug 6 07:42:34.916951 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 6 07:42:34.916993 kernel: PCI: Using E820 reservations for host bridge windows Aug 6 07:42:34.917009 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 6 07:42:34.917022 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 6 07:42:34.917288 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 6 07:42:34.917401 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 6 07:42:34.917499 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 6 07:42:34.917512 kernel: acpiphp: Slot [3] registered Aug 6 07:42:34.917527 kernel: acpiphp: Slot [4] registered Aug 6 07:42:34.917536 kernel: acpiphp: Slot [5] registered Aug 6 07:42:34.917546 kernel: acpiphp: Slot [6] registered Aug 6 07:42:34.917554 kernel: acpiphp: Slot [7] registered Aug 6 07:42:34.917563 kernel: acpiphp: Slot [8] registered Aug 6 07:42:34.917572 kernel: acpiphp: Slot [9] registered Aug 6 07:42:34.917581 kernel: acpiphp: Slot [10] registered Aug 6 07:42:34.917590 kernel: acpiphp: Slot [11] registered Aug 6 
07:42:34.917600 kernel: acpiphp: Slot [12] registered Aug 6 07:42:34.917608 kernel: acpiphp: Slot [13] registered Aug 6 07:42:34.917621 kernel: acpiphp: Slot [14] registered Aug 6 07:42:34.917630 kernel: acpiphp: Slot [15] registered Aug 6 07:42:34.917639 kernel: acpiphp: Slot [16] registered Aug 6 07:42:34.917648 kernel: acpiphp: Slot [17] registered Aug 6 07:42:34.917657 kernel: acpiphp: Slot [18] registered Aug 6 07:42:34.917666 kernel: acpiphp: Slot [19] registered Aug 6 07:42:34.917675 kernel: acpiphp: Slot [20] registered Aug 6 07:42:34.917684 kernel: acpiphp: Slot [21] registered Aug 6 07:42:34.917693 kernel: acpiphp: Slot [22] registered Aug 6 07:42:34.917705 kernel: acpiphp: Slot [23] registered Aug 6 07:42:34.917714 kernel: acpiphp: Slot [24] registered Aug 6 07:42:34.917722 kernel: acpiphp: Slot [25] registered Aug 6 07:42:34.917731 kernel: acpiphp: Slot [26] registered Aug 6 07:42:34.917740 kernel: acpiphp: Slot [27] registered Aug 6 07:42:34.917749 kernel: acpiphp: Slot [28] registered Aug 6 07:42:34.917758 kernel: acpiphp: Slot [29] registered Aug 6 07:42:34.917767 kernel: acpiphp: Slot [30] registered Aug 6 07:42:34.917776 kernel: acpiphp: Slot [31] registered Aug 6 07:42:34.917785 kernel: PCI host bridge to bus 0000:00 Aug 6 07:42:34.917927 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 6 07:42:34.918955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 6 07:42:34.921248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 6 07:42:34.921413 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 6 07:42:34.921555 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 6 07:42:34.921683 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 6 07:42:34.921880 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 6 07:42:34.922073 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 6 07:42:34.922192 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 6 07:42:34.922312 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 6 07:42:34.922415 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 6 07:42:34.922546 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 6 07:42:34.922691 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 6 07:42:34.922826 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 6 07:42:34.925235 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 6 07:42:34.925415 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 6 07:42:34.925560 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 6 07:42:34.925662 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 6 07:42:34.925755 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 6 07:42:34.925871 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 6 07:42:34.926009 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 6 07:42:34.926135 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 6 07:42:34.926231 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 6 07:42:34.926366 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 6 07:42:34.926463 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 6 07:42:34.926578 kernel: pci 0000:00:03.0: 
[1af4:1000] type 00 class 0x020000 Aug 6 07:42:34.926683 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 6 07:42:34.926780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 6 07:42:34.926873 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 6 07:42:34.929086 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 6 07:42:34.929305 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 6 07:42:34.929453 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 6 07:42:34.929598 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 6 07:42:34.929798 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 6 07:42:34.929959 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 6 07:42:34.930157 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 6 07:42:34.930308 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 6 07:42:34.930469 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 6 07:42:34.930622 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 6 07:42:34.930772 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 6 07:42:34.930888 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 6 07:42:34.931756 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 6 07:42:34.931878 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Aug 6 07:42:34.932040 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 6 07:42:34.932197 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 6 07:42:34.932326 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 6 07:42:34.932424 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 6 07:42:34.932528 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 6 07:42:34.932540 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 6 07:42:34.932550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 6 07:42:34.932560 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 6 07:42:34.932569 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 6 07:42:34.932578 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 6 07:42:34.932588 kernel: iommu: Default domain type: Translated Aug 6 07:42:34.932601 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 6 07:42:34.932611 kernel: PCI: Using ACPI for IRQ routing Aug 6 07:42:34.932620 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 6 07:42:34.932629 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 6 07:42:34.932638 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 6 07:42:34.932738 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 6 07:42:34.932834 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 6 07:42:34.932930 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 6 07:42:34.932947 kernel: vgaarb: loaded Aug 6 07:42:34.932956 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 6 07:42:34.932976 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 6 07:42:34.932985 kernel: clocksource: Switched to clocksource kvm-clock Aug 6 07:42:34.932994 kernel: VFS: Disk quotas dquot_6.6.0 Aug 6 07:42:34.933004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 6 07:42:34.933013 kernel: pnp: PnP ACPI init Aug 6 
07:42:34.933022 kernel: pnp: PnP ACPI: found 4 devices Aug 6 07:42:34.933031 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 6 07:42:34.933094 kernel: NET: Registered PF_INET protocol family Aug 6 07:42:34.933108 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 6 07:42:34.933121 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 6 07:42:34.933134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 6 07:42:34.933144 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 6 07:42:34.933153 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 6 07:42:34.933162 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 6 07:42:34.933171 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 07:42:34.933181 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 07:42:34.933194 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 6 07:42:34.933204 kernel: NET: Registered PF_XDP protocol family Aug 6 07:42:34.933317 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 6 07:42:34.933404 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 6 07:42:34.933488 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 6 07:42:34.933580 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 6 07:42:34.933667 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 6 07:42:34.933771 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 6 07:42:34.933879 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 6 07:42:34.933893 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 6 07:42:34.937138 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 32235 usecs Aug 6 07:42:34.937181 kernel: PCI: CLS 0 bytes, default 64 Aug 6 07:42:34.937192 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 6 07:42:34.937202 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 6 07:42:34.937212 kernel: Initialise system trusted keyrings Aug 6 07:42:34.937221 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 6 07:42:34.937240 kernel: Key type asymmetric registered Aug 6 07:42:34.937249 kernel: Asymmetric key parser 'x509' registered Aug 6 07:42:34.937259 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 6 07:42:34.937268 kernel: io scheduler mq-deadline registered Aug 6 07:42:34.937277 kernel: io scheduler kyber registered Aug 6 07:42:34.937286 kernel: io scheduler bfq registered Aug 6 07:42:34.937295 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 6 07:42:34.937306 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 6 07:42:34.937315 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 6 07:42:34.937324 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 6 07:42:34.937337 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 6 07:42:34.937347 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 6 07:42:34.937356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 6 07:42:34.937365 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 6 07:42:34.937374 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 6 
07:42:34.937533 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 6 07:42:34.937548 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 6 07:42:34.937639 kernel: rtc_cmos 00:03: registered as rtc0 Aug 6 07:42:34.937732 kernel: rtc_cmos 00:03: setting system clock to 2024-08-06T07:42:34 UTC (1722930154) Aug 6 07:42:34.937821 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 6 07:42:34.937832 kernel: intel_pstate: CPU model not supported Aug 6 07:42:34.937841 kernel: NET: Registered PF_INET6 protocol family Aug 6 07:42:34.937851 kernel: Segment Routing with IPv6 Aug 6 07:42:34.937860 kernel: In-situ OAM (IOAM) with IPv6 Aug 6 07:42:34.937869 kernel: NET: Registered PF_PACKET protocol family Aug 6 07:42:34.937878 kernel: Key type dns_resolver registered Aug 6 07:42:34.937890 kernel: IPI shorthand broadcast: enabled Aug 6 07:42:34.937900 kernel: sched_clock: Marking stable (977004053, 101783200)->(1189895683, -111108430) Aug 6 07:42:34.937909 kernel: registered taskstats version 1 Aug 6 07:42:34.937918 kernel: Loading compiled-in X.509 certificates Aug 6 07:42:34.937927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 6 07:42:34.937936 kernel: Key type .fscrypt registered Aug 6 07:42:34.937945 kernel: Key type fscrypt-provisioning registered Aug 6 07:42:34.937954 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 6 07:42:34.937976 kernel: ima: Allocated hash algorithm: sha1 Aug 6 07:42:34.937990 kernel: ima: No architecture policies found Aug 6 07:42:34.937999 kernel: clk: Disabling unused clocks Aug 6 07:42:34.938008 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 6 07:42:34.938017 kernel: Write protecting the kernel read-only data: 36864k Aug 6 07:42:34.938026 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 6 07:42:34.938059 kernel: Run /init as init process Aug 6 07:42:34.938072 kernel: with arguments: Aug 6 07:42:34.938082 kernel: /init Aug 6 07:42:34.938091 kernel: with environment: Aug 6 07:42:34.938103 kernel: HOME=/ Aug 6 07:42:34.938112 kernel: TERM=linux Aug 6 07:42:34.938121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 6 07:42:34.938135 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 07:42:34.938151 systemd[1]: Detected virtualization kvm. Aug 6 07:42:34.938161 systemd[1]: Detected architecture x86-64. Aug 6 07:42:34.938171 systemd[1]: Running in initrd. Aug 6 07:42:34.938180 systemd[1]: No hostname configured, using default hostname. Aug 6 07:42:34.938193 systemd[1]: Hostname set to . Aug 6 07:42:34.938203 systemd[1]: Initializing machine ID from VM UUID. Aug 6 07:42:34.938213 systemd[1]: Queued start job for default target initrd.target. Aug 6 07:42:34.938223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:42:34.938233 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:42:34.938243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 6 07:42:34.938253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 07:42:34.938263 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 6 07:42:34.938277 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 6 07:42:34.938289 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 6 07:42:34.938299 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 6 07:42:34.938309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:42:34.938319 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:42:34.938329 systemd[1]: Reached target paths.target - Path Units. Aug 6 07:42:34.938343 systemd[1]: Reached target slices.target - Slice Units. Aug 6 07:42:34.938354 systemd[1]: Reached target swap.target - Swaps. Aug 6 07:42:34.938364 systemd[1]: Reached target timers.target - Timer Units. Aug 6 07:42:34.938377 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 07:42:34.938388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 07:42:34.938398 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 6 07:42:34.938412 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 6 07:42:34.938422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:42:34.938432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 07:42:34.938442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:42:34.938452 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 07:42:34.938463 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 6 07:42:34.938473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 07:42:34.938483 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 6 07:42:34.938496 systemd[1]: Starting systemd-fsck-usr.service... Aug 6 07:42:34.938506 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 07:42:34.938516 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 07:42:34.938526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:34.938536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 6 07:42:34.938579 systemd-journald[182]: Collecting audit messages is disabled. Aug 6 07:42:34.938609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:42:34.938620 systemd[1]: Finished systemd-fsck-usr.service. Aug 6 07:42:34.938630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 07:42:34.938646 systemd-journald[182]: Journal started Aug 6 07:42:34.938669 systemd-journald[182]: Runtime Journal (/run/log/journal/db2d8b615ca44d77abbb8421368c847f) is 4.9M, max 39.3M, 34.4M free. Aug 6 07:42:34.942011 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 07:42:34.950072 systemd-modules-load[183]: Inserted module 'overlay' Aug 6 07:42:34.971434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
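A note on the escaped unit names in the entries above (dev-disk-by\x2dlabel-ROOT.device, dev-mapper-usr.device and friends): systemd derives a device unit name from the block-device path by mapping '/' to '-' and hex-escaping characters such as '-' itself. A minimal Python sketch of that mapping, approximating what systemd-escape --path does and ignoring edge cases like a leading dot:

    def device_unit_name(path):
        # Approximate systemd path escaping: strip surrounding slashes,
        # map '/' to '-', hex-escape anything outside [A-Za-z0-9:_.].
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + ".device"

    # Matches the unit names journald printed above.
    assert device_unit_name("/dev/disk/by-label/EFI-SYSTEM") == "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM.device"
    assert device_unit_name("/dev/mapper/usr") == "dev-mapper-usr.device"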
Aug 6 07:42:34.972223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 07:42:34.983393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:42:34.986817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 07:42:34.997298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 6 07:42:34.997337 kernel: Bridge firewalling registered Aug 6 07:42:34.995650 systemd-modules-load[183]: Inserted module 'br_netfilter' Aug 6 07:42:35.004249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 6 07:42:35.008666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 07:42:35.019296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:42:35.026881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:42:35.027593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 07:42:35.037986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:42:35.042241 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 6 07:42:35.055801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:42:35.060229 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 07:42:35.072713 dracut-cmdline[216]: dracut-dracut-053 Aug 6 07:42:35.079157 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:42:35.106858 systemd-resolved[220]: Positive Trust Anchors: Aug 6 07:42:35.106875 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 07:42:35.106911 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 07:42:35.112398 systemd-resolved[220]: Defaulting to hostname 'linux'. Aug 6 07:42:35.113655 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 07:42:35.114255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:42:35.187008 kernel: SCSI subsystem initialized Aug 6 07:42:35.200016 kernel: Loading iSCSI transport class v2.0-870. 
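The dracut-cmdline hook above echoes the full kernel command line, including duplicated parameters (rootflags=rw and mount.usrflags=ro appear twice) and the dm-verity settings for /usr. A small Python sketch of splitting such a line into parameters while keeping duplicates; on a booted system /proc/cmdline carries the same string. This is only a sketch: real kernel parsing also handles quoting, which is ignored here.

    from collections import defaultdict

    def parse_cmdline(cmdline):
        # Parameters are whitespace-separated tokens: "key=value" keeps the
        # value (split at the first '='), a bare flag is recorded as True,
        # and duplicates are kept in order.
        params = defaultdict(list)
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key].append(value if sep else True)
        return dict(params)

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("root"), args.get("verity.usrhash"), args.get("rootflags"))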
Aug 6 07:42:35.215000 kernel: iscsi: registered transport (tcp) Aug 6 07:42:35.241017 kernel: iscsi: registered transport (qla4xxx) Aug 6 07:42:35.241194 kernel: QLogic iSCSI HBA Driver Aug 6 07:42:35.299289 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 6 07:42:35.310313 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 6 07:42:35.341301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 6 07:42:35.341383 kernel: device-mapper: uevent: version 1.0.3 Aug 6 07:42:35.341398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 6 07:42:35.393126 kernel: raid6: avx2x4 gen() 20030 MB/s Aug 6 07:42:35.410040 kernel: raid6: avx2x2 gen() 22259 MB/s Aug 6 07:42:35.427205 kernel: raid6: avx2x1 gen() 20282 MB/s Aug 6 07:42:35.427284 kernel: raid6: using algorithm avx2x2 gen() 22259 MB/s Aug 6 07:42:35.445188 kernel: raid6: .... xor() 19313 MB/s, rmw enabled Aug 6 07:42:35.445270 kernel: raid6: using avx2x2 recovery algorithm Aug 6 07:42:35.478004 kernel: xor: automatically using best checksumming function avx Aug 6 07:42:35.667011 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 6 07:42:35.682219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 6 07:42:35.694315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:42:35.709422 systemd-udevd[402]: Using default interface naming scheme 'v255'. Aug 6 07:42:35.715113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:42:35.723334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 6 07:42:35.746018 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 6 07:42:35.790273 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 07:42:35.795276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 07:42:35.866438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 07:42:35.878651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 6 07:42:35.916102 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 6 07:42:35.921148 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 07:42:35.922838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:42:35.924213 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 07:42:35.932411 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 6 07:42:35.964011 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 6 07:42:36.068206 kernel: scsi host0: Virtio SCSI HBA Aug 6 07:42:36.068393 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 6 07:42:36.068509 kernel: cryptd: max_cpu_qlen set to 1000 Aug 6 07:42:36.068523 kernel: libata version 3.00 loaded. Aug 6 07:42:36.068548 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 6 07:42:36.068739 kernel: scsi host1: ata_piix Aug 6 07:42:36.068928 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 6 07:42:36.068944 kernel: GPT:9289727 != 125829119 Aug 6 07:42:36.068956 kernel: GPT:Alternate GPT header not at the end of the disk. 
Aug 6 07:42:36.068983 kernel: GPT:9289727 != 125829119 Aug 6 07:42:36.068996 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 6 07:42:36.069009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:42:36.069030 kernel: scsi host2: ata_piix Aug 6 07:42:36.069259 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 6 07:42:36.069280 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 6 07:42:36.069298 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 6 07:42:36.087543 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Aug 6 07:42:36.087726 kernel: ACPI: bus type USB registered Aug 6 07:42:36.087743 kernel: usbcore: registered new interface driver usbfs Aug 6 07:42:36.087757 kernel: usbcore: registered new interface driver hub Aug 6 07:42:36.087781 kernel: usbcore: registered new device driver usb Aug 6 07:42:36.087795 kernel: AVX2 version of gcm_enc/dec engaged. Aug 6 07:42:36.087808 kernel: AES CTR mode by8 optimization enabled Aug 6 07:42:35.968026 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 6 07:42:36.058167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 07:42:36.059761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:42:36.060494 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:42:36.060851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:42:36.061018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:42:36.061489 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:36.070415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:36.132673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:42:36.139270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:42:36.158187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:42:36.248204 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 6 07:42:36.258394 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 6 07:42:36.258564 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 6 07:42:36.258682 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 6 07:42:36.258811 kernel: hub 1-0:1.0: USB hub found Aug 6 07:42:36.259035 kernel: hub 1-0:1.0: 2 ports detected Aug 6 07:42:36.259188 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Aug 6 07:42:36.254599 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 6 07:42:36.265059 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (450) Aug 6 07:42:36.266999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 6 07:42:36.279753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 07:42:36.284780 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 6 07:42:36.286000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
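The GPT warning above is size arithmetic: the image ships its backup GPT header at LBA 9289727, while the droplet's virtio disk has 125829120 sectors, so the last LBA is 125829119. disk-uuid.service (started just below) rewrites the primary and secondary headers to match the real disk. A quick check of the numbers, assuming the 512-byte sectors reported by virtio_blk; the ~4.8 GB figure suggests, presumably, the size of disk the image was originally built for:

    SECTOR = 512
    disk_sectors = 125_829_120   # virtio_blk: [vda] 125829120 512-byte logical blocks
    last_lba = disk_sectors - 1  # 125829119, where the backup header should sit
    image_alt_lba = 9_289_727    # where the shipped image put it (9289727 != 125829119)

    print(disk_sectors * SECTOR / 10**9)         # ~64.4 GB -> the provisioned disk
    print((image_alt_lba + 1) * SECTOR / 10**9)  # ~4.76 GB -> extent covered by the image's GPT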
Aug 6 07:42:36.291268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 6 07:42:36.311465 disk-uuid[549]: Primary Header is updated. Aug 6 07:42:36.311465 disk-uuid[549]: Secondary Entries is updated. Aug 6 07:42:36.311465 disk-uuid[549]: Secondary Header is updated. Aug 6 07:42:36.325011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:42:36.334008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:42:36.345999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:42:37.343066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:42:37.344146 disk-uuid[550]: The operation has completed successfully. Aug 6 07:42:37.395687 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 6 07:42:37.395802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 6 07:42:37.402223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 6 07:42:37.407566 sh[565]: Success Aug 6 07:42:37.423993 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 6 07:42:37.492931 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 6 07:42:37.495122 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 6 07:42:37.498735 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 6 07:42:37.529371 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 6 07:42:37.529470 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:42:37.529493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 6 07:42:37.529512 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 6 07:42:37.529991 kernel: BTRFS info (device dm-0): using free space tree Aug 6 07:42:37.538548 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 6 07:42:37.539653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 6 07:42:37.547790 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 6 07:42:37.551332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 6 07:42:37.564114 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:42:37.564189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:42:37.564224 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:42:37.571002 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:42:37.582671 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 6 07:42:37.584143 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:42:37.591123 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 6 07:42:37.597246 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 6 07:42:37.718868 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 07:42:37.728286 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 6 07:42:37.753608 systemd-networkd[749]: lo: Link UP Aug 6 07:42:37.753620 systemd-networkd[749]: lo: Gained carrier Aug 6 07:42:37.757342 systemd-networkd[749]: Enumeration completed Aug 6 07:42:37.757843 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 07:42:37.758346 systemd[1]: Reached target network.target - Network. Aug 6 07:42:37.759119 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 6 07:42:37.759122 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 6 07:42:37.760521 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 07:42:37.760525 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 6 07:42:37.761395 systemd-networkd[749]: eth0: Link UP Aug 6 07:42:37.761400 systemd-networkd[749]: eth0: Gained carrier Aug 6 07:42:37.761408 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 6 07:42:37.765594 ignition[653]: Ignition 2.19.0 Aug 6 07:42:37.766227 systemd-networkd[749]: eth1: Link UP Aug 6 07:42:37.765602 ignition[653]: Stage: fetch-offline Aug 6 07:42:37.766231 systemd-networkd[749]: eth1: Gained carrier Aug 6 07:42:37.765656 ignition[653]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:37.766245 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 07:42:37.765670 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:37.765802 ignition[653]: parsed url from cmdline: "" Aug 6 07:42:37.765808 ignition[653]: no config URL provided Aug 6 07:42:37.765818 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 07:42:37.765834 ignition[653]: no config at "/usr/lib/ignition/user.ign" Aug 6 07:42:37.765844 ignition[653]: failed to fetch config: resource requires networking Aug 6 07:42:37.770141 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 07:42:37.768587 ignition[653]: Ignition finished successfully Aug 6 07:42:37.777287 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
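The fetch stage starting here is the step that actually needs networking: as the following entries show, it GETs the user data from the DigitalOcean metadata service and logs a SHA-512 for the config it parsed. A minimal standard-library sketch of that request from inside a droplet; whether the logged digest is taken over exactly these raw bytes is an assumption, not something the log states.

    import hashlib
    import urllib.request

    # Endpoint taken from the GET line below; reachable only from inside the droplet.
    URL = "http://169.254.169.254/metadata/v1/user-data"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        user_data = resp.read()

    print(hashlib.sha512(user_data).hexdigest())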
Aug 6 07:42:37.779048 systemd-networkd[749]: eth0: DHCPv4 address 64.23.156.122/20, gateway 64.23.144.1 acquired from 169.254.169.253 Aug 6 07:42:37.785221 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Aug 6 07:42:37.804746 ignition[757]: Ignition 2.19.0 Aug 6 07:42:37.804758 ignition[757]: Stage: fetch Aug 6 07:42:37.805027 ignition[757]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:37.805044 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:37.805263 ignition[757]: parsed url from cmdline: "" Aug 6 07:42:37.805268 ignition[757]: no config URL provided Aug 6 07:42:37.805274 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 07:42:37.805283 ignition[757]: no config at "/usr/lib/ignition/user.ign" Aug 6 07:42:37.805304 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 6 07:42:37.826918 ignition[757]: GET result: OK Aug 6 07:42:37.827105 ignition[757]: parsing config with SHA512: 507b6ce797915359a7bf8409afe161b3a232de265fb22c1ca3fb106c138d96d590dc8b5976f378865883232c237f8d975855ce33f83e186adc92044a8b141502 Aug 6 07:42:37.833810 unknown[757]: fetched base config from "system" Aug 6 07:42:37.833826 unknown[757]: fetched base config from "system" Aug 6 07:42:37.833836 unknown[757]: fetched user config from "digitalocean" Aug 6 07:42:37.834798 ignition[757]: fetch: fetch complete Aug 6 07:42:37.834812 ignition[757]: fetch: fetch passed Aug 6 07:42:37.834891 ignition[757]: Ignition finished successfully Aug 6 07:42:37.836658 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 6 07:42:37.844236 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 6 07:42:37.862902 ignition[765]: Ignition 2.19.0 Aug 6 07:42:37.862915 ignition[765]: Stage: kargs Aug 6 07:42:37.863126 ignition[765]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:37.863135 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:37.864330 ignition[765]: kargs: kargs passed Aug 6 07:42:37.866036 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 6 07:42:37.864398 ignition[765]: Ignition finished successfully Aug 6 07:42:37.871508 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 6 07:42:37.899767 ignition[772]: Ignition 2.19.0 Aug 6 07:42:37.899781 ignition[772]: Stage: disks Aug 6 07:42:37.899984 ignition[772]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:37.899994 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:37.902298 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 6 07:42:37.900986 ignition[772]: disks: disks passed Aug 6 07:42:37.903688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 6 07:42:37.901132 ignition[772]: Ignition finished successfully Aug 6 07:42:37.904143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 6 07:42:37.907887 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 6 07:42:37.908256 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 07:42:37.909142 systemd[1]: Reached target basic.target - Basic System. Aug 6 07:42:37.915275 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
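For the DHCPv4 lease logged above (64.23.156.122/20 with gateway 64.23.144.1, served from 169.254.169.253), the /20 prefix places the address and the gateway in the same 64.23.144.0/20 network, so the gateway is on-link for eth0. The standard library confirms the arithmetic:

    import ipaddress

    iface = ipaddress.ip_interface("64.23.156.122/20")   # eth0 lease from the log
    gateway = ipaddress.ip_address("64.23.144.1")

    print(iface.network)             # 64.23.144.0/20
    print(gateway in iface.network)  # True: gateway is inside eth0's subnet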
Aug 6 07:42:37.935351 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 6 07:42:37.937950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 6 07:42:37.942295 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 6 07:42:38.097001 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 6 07:42:38.097681 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 6 07:42:38.099138 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 6 07:42:38.112200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 07:42:38.115126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 6 07:42:38.118240 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 6 07:42:38.124044 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (790) Aug 6 07:42:38.127478 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:42:38.127569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:42:38.127591 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:42:38.132168 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 6 07:42:38.135551 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 6 07:42:38.135595 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 6 07:42:38.139578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 6 07:42:38.153407 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:42:38.143769 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 6 07:42:38.154234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 6 07:42:38.217009 coreos-metadata[793]: Aug 06 07:42:38.216 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:42:38.223602 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory Aug 6 07:42:38.227131 coreos-metadata[793]: Aug 06 07:42:38.227 INFO Fetch successful Aug 6 07:42:38.230088 coreos-metadata[792]: Aug 06 07:42:38.230 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:42:38.232627 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Aug 6 07:42:38.235144 coreos-metadata[793]: Aug 06 07:42:38.235 INFO wrote hostname ci-4012.1.0-5-8b675ffd7f to /sysroot/etc/hostname Aug 6 07:42:38.237278 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 6 07:42:38.239647 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Aug 6 07:42:38.240952 coreos-metadata[792]: Aug 06 07:42:38.239 INFO Fetch successful Aug 6 07:42:38.246632 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 6 07:42:38.246754 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Aug 6 07:42:38.248593 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Aug 6 07:42:38.358029 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 6 07:42:38.364182 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
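flatcar-metadata-hostname.service above is a thin wrapper around the same metadata service: coreos-metadata fetches the droplet's JSON document and writes the hostname into the new root. A rough sketch of the equivalent steps; the "hostname" field name is assumed from the DigitalOcean metadata layout, not shown in the log, and the value written here was ci-4012.1.0-5-8b675ffd7f.

    import json
    import urllib.request

    # Same URL coreos-metadata fetches above; droplet-internal only.
    with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json", timeout=5) as resp:
        metadata = json.load(resp)

    hostname = metadata["hostname"]  # assumed field name
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")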
Aug 6 07:42:38.368234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 6 07:42:38.380002 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:42:38.408204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 6 07:42:38.412613 ignition[911]: INFO : Ignition 2.19.0 Aug 6 07:42:38.412613 ignition[911]: INFO : Stage: mount Aug 6 07:42:38.413805 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:38.413805 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:38.415551 ignition[911]: INFO : mount: mount passed Aug 6 07:42:38.415551 ignition[911]: INFO : Ignition finished successfully Aug 6 07:42:38.415117 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 6 07:42:38.425754 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 6 07:42:38.526630 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 6 07:42:38.533306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 07:42:38.554002 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923) Aug 6 07:42:38.556834 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:42:38.556897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:42:38.556910 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:42:38.560995 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:42:38.562586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 6 07:42:38.606332 ignition[940]: INFO : Ignition 2.19.0 Aug 6 07:42:38.606332 ignition[940]: INFO : Stage: files Aug 6 07:42:38.607518 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:38.607518 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:38.607518 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Aug 6 07:42:38.609179 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 6 07:42:38.609179 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 6 07:42:38.611628 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 6 07:42:38.612186 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 6 07:42:38.612723 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 6 07:42:38.612221 unknown[940]: wrote ssh authorized keys file for user: core Aug 6 07:42:38.614377 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 6 07:42:38.615144 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 6 07:42:38.615144 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 07:42:38.615144 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 6 07:42:38.637156 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 6 07:42:38.690172 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 07:42:38.691112 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 6 07:42:38.691112 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 6 07:42:39.152817 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Aug 6 07:42:39.221686 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 6 07:42:39.221686 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 07:42:39.222954 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:42:39.226891 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 6 07:42:39.366232 systemd-networkd[749]: eth1: Gained IPv6LL Aug 6 07:42:39.430222 systemd-networkd[749]: eth0: Gained IPv6LL Aug 6 07:42:39.603025 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Aug 6 07:42:39.829152 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:42:39.829152 ignition[940]: INFO : files: op(d): [started] processing unit "containerd.service" Aug 
6 07:42:39.830934 ignition[940]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(d): [finished] processing unit "containerd.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 6 07:42:39.830934 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 6 07:42:39.830934 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 6 07:42:39.830934 ignition[940]: INFO : files: files passed Aug 6 07:42:39.840578 ignition[940]: INFO : Ignition finished successfully Aug 6 07:42:39.834063 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 6 07:42:39.841283 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 6 07:42:39.844192 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 6 07:42:39.848014 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 6 07:42:39.848132 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 6 07:42:39.873042 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:42:39.873042 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:42:39.875623 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:42:39.877879 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 07:42:39.878565 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 6 07:42:39.884353 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 6 07:42:39.923810 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 6 07:42:39.923933 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 6 07:42:39.925360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 6 07:42:39.925775 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 6 07:42:39.926526 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
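Among the files-stage operations above, op(b) and op(c) stage the kubernetes systemd-sysext image: the .raw file is downloaded into /opt/extensions and /etc/extensions/kubernetes.raw is pointed at it so systemd-sysext can merge it after switch-root. A minimal sketch of those two ops is below; the paths and URL are copied from the log, while Ignition's actual streaming, verification, and op ordering are omitted.

```python
# Sketch of the two "files" ops recorded above for the kubernetes sysext: download the
# .raw image into /opt/extensions and symlink /etc/extensions/kubernetes.raw at it so
# systemd-sysext merges it on the booted system. Illustrative only.
import os
import urllib.request

SYSROOT = "/sysroot"
RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/latest/"
           "kubernetes-v1.28.7-x86-64.raw")
target = f"{SYSROOT}/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
link = f"{SYSROOT}/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(target), exist_ok=True)
urllib.request.urlretrieve(RAW_URL, target)  # op(c): fetch the sysext image
os.makedirs(os.path.dirname(link), exist_ok=True)
# op(b): the link target is the path as seen from inside the new root
os.symlink("/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw", link)
```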
Aug 6 07:42:39.931214 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 6 07:42:39.951332 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 07:42:39.956221 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 6 07:42:39.979986 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:42:39.980540 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:42:39.981179 systemd[1]: Stopped target timers.target - Timer Units. Aug 6 07:42:39.982043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 6 07:42:39.982202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 07:42:39.983098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 6 07:42:39.983917 systemd[1]: Stopped target basic.target - Basic System. Aug 6 07:42:39.984680 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 6 07:42:39.985407 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 6 07:42:39.986119 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 6 07:42:39.986805 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 6 07:42:39.987461 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 07:42:39.988194 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 6 07:42:39.988911 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 6 07:42:39.989682 systemd[1]: Stopped target swap.target - Swaps. Aug 6 07:42:39.990259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 6 07:42:39.990391 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 6 07:42:39.991364 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:42:39.991888 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:42:39.992492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 6 07:42:39.992604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:42:39.993201 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 6 07:42:39.993350 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 6 07:42:39.994355 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 6 07:42:39.994511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 07:42:39.995416 systemd[1]: ignition-files.service: Deactivated successfully. Aug 6 07:42:39.995568 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 6 07:42:39.996058 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 6 07:42:39.996157 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 6 07:42:40.003423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 6 07:42:40.007324 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 6 07:42:40.007751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 6 07:42:40.007927 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 07:42:40.008511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Aug 6 07:42:40.008644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 07:42:40.014501 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 6 07:42:40.015941 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 6 07:42:40.035996 ignition[993]: INFO : Ignition 2.19.0 Aug 6 07:42:40.037343 ignition[993]: INFO : Stage: umount Aug 6 07:42:40.037927 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:42:40.037927 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:42:40.040469 ignition[993]: INFO : umount: umount passed Aug 6 07:42:40.040469 ignition[993]: INFO : Ignition finished successfully Aug 6 07:42:40.043287 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 6 07:42:40.043467 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 6 07:42:40.044238 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 6 07:42:40.044295 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 6 07:42:40.044847 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 6 07:42:40.044916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 6 07:42:40.046813 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 6 07:42:40.046869 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 6 07:42:40.047640 systemd[1]: Stopped target network.target - Network. Aug 6 07:42:40.048347 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 6 07:42:40.048492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 07:42:40.051651 systemd[1]: Stopped target paths.target - Path Units. Aug 6 07:42:40.052342 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 6 07:42:40.054230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:42:40.054727 systemd[1]: Stopped target slices.target - Slice Units. Aug 6 07:42:40.069639 systemd[1]: Stopped target sockets.target - Socket Units. Aug 6 07:42:40.075285 systemd[1]: iscsid.socket: Deactivated successfully. Aug 6 07:42:40.075352 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 07:42:40.075771 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 6 07:42:40.075827 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 07:42:40.081961 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 6 07:42:40.082101 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 6 07:42:40.086204 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 6 07:42:40.086311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 6 07:42:40.091823 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 6 07:42:40.092260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 6 07:42:40.094592 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 6 07:42:40.095402 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 6 07:42:40.095523 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 6 07:42:40.097691 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 6 07:42:40.098708 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Aug 6 07:42:40.099073 systemd-networkd[749]: eth1: DHCPv6 lease lost Aug 6 07:42:40.103063 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 6 07:42:40.103091 systemd-networkd[749]: eth0: DHCPv6 lease lost Aug 6 07:42:40.103601 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 6 07:42:40.104695 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 6 07:42:40.104748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 07:42:40.105670 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 6 07:42:40.105792 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 6 07:42:40.107625 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 6 07:42:40.107746 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:42:40.114150 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 6 07:42:40.114535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 6 07:42:40.114609 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 07:42:40.115091 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 6 07:42:40.115136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:42:40.116231 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 6 07:42:40.116286 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 6 07:42:40.116767 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:42:40.131707 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 6 07:42:40.132478 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 6 07:42:40.134445 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 6 07:42:40.134650 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:42:40.136585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 6 07:42:40.136689 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 6 07:42:40.137777 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 6 07:42:40.137836 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:42:40.138535 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 6 07:42:40.138604 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 6 07:42:40.139680 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 6 07:42:40.139751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 6 07:42:40.140886 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 07:42:40.140956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:42:40.153502 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 6 07:42:40.154591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 6 07:42:40.154691 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:42:40.155236 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 6 07:42:40.155303 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 6 07:42:40.155812 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 6 07:42:40.155875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:42:40.156408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:42:40.156471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:42:40.162773 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 6 07:42:40.162885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 6 07:42:40.163894 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 6 07:42:40.176306 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 6 07:42:40.186672 systemd[1]: Switching root. Aug 6 07:42:40.231072 systemd-journald[182]: Journal stopped Aug 6 07:42:41.368216 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Aug 6 07:42:41.368319 kernel: SELinux: policy capability network_peer_controls=1 Aug 6 07:42:41.368343 kernel: SELinux: policy capability open_perms=1 Aug 6 07:42:41.368370 kernel: SELinux: policy capability extended_socket_class=1 Aug 6 07:42:41.368395 kernel: SELinux: policy capability always_check_network=0 Aug 6 07:42:41.368415 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 6 07:42:41.368435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 6 07:42:41.368455 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 6 07:42:41.368483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 6 07:42:41.368503 systemd[1]: Successfully loaded SELinux policy in 44.314ms. Aug 6 07:42:41.368542 kernel: audit: type=1403 audit(1722930160.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 6 07:42:41.368562 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.211ms. Aug 6 07:42:41.368584 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 07:42:41.368605 systemd[1]: Detected virtualization kvm. Aug 6 07:42:41.368626 systemd[1]: Detected architecture x86-64. Aug 6 07:42:41.368647 systemd[1]: Detected first boot. Aug 6 07:42:41.368674 systemd[1]: Hostname set to . Aug 6 07:42:41.368695 systemd[1]: Initializing machine ID from VM UUID. Aug 6 07:42:41.368717 zram_generator::config[1052]: No configuration found. Aug 6 07:42:41.368742 systemd[1]: Populated /etc with preset unit settings. Aug 6 07:42:41.368764 systemd[1]: Queued start job for default target multi-user.target. Aug 6 07:42:41.368786 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 6 07:42:41.368810 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 6 07:42:41.368832 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 6 07:42:41.368857 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 6 07:42:41.368879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 6 07:42:41.368900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 6 07:42:41.368921 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Aug 6 07:42:41.368949 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 6 07:42:41.369002 systemd[1]: Created slice user.slice - User and Session Slice. Aug 6 07:42:41.369026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:42:41.369048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:42:41.369080 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 6 07:42:41.369108 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 6 07:42:41.369130 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 6 07:42:41.369150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 07:42:41.369171 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 6 07:42:41.369192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:42:41.369214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 6 07:42:41.369236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:42:41.369260 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 07:42:41.369286 systemd[1]: Reached target slices.target - Slice Units. Aug 6 07:42:41.369308 systemd[1]: Reached target swap.target - Swaps. Aug 6 07:42:41.369330 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 6 07:42:41.369351 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 6 07:42:41.369373 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 6 07:42:41.369395 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 6 07:42:41.369418 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:42:41.369440 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 07:42:41.369465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:42:41.369487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 6 07:42:41.369509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 6 07:42:41.369531 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 6 07:42:41.369553 systemd[1]: Mounting media.mount - External Media Directory... Aug 6 07:42:41.369575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:41.369597 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 6 07:42:41.369619 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 6 07:42:41.369641 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 6 07:42:41.369666 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 6 07:42:41.369689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:42:41.369709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 07:42:41.369733 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Aug 6 07:42:41.369756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:42:41.369778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 07:42:41.369801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:42:41.369831 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 6 07:42:41.369857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:42:41.369879 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 6 07:42:41.369907 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 6 07:42:41.369930 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 6 07:42:41.369952 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 07:42:41.371240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 07:42:41.371282 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 6 07:42:41.371306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 6 07:42:41.371337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 07:42:41.371373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:41.371395 kernel: loop: module loaded Aug 6 07:42:41.371417 kernel: fuse: init (API version 7.39) Aug 6 07:42:41.371438 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 6 07:42:41.371459 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 6 07:42:41.371534 systemd-journald[1146]: Collecting audit messages is disabled. Aug 6 07:42:41.371578 systemd[1]: Mounted media.mount - External Media Directory. Aug 6 07:42:41.371604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 6 07:42:41.371625 systemd-journald[1146]: Journal started Aug 6 07:42:41.371667 systemd-journald[1146]: Runtime Journal (/run/log/journal/db2d8b615ca44d77abbb8421368c847f) is 4.9M, max 39.3M, 34.4M free. Aug 6 07:42:41.379040 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 07:42:41.381470 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 6 07:42:41.382062 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 6 07:42:41.383541 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:42:41.384239 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 6 07:42:41.384412 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 6 07:42:41.385112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:42:41.385285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:42:41.385880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:42:41.386088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:42:41.386804 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 6 07:42:41.387062 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Aug 6 07:42:41.387834 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:42:41.388014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:42:41.389687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 07:42:41.390421 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 6 07:42:41.392492 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 6 07:42:41.400790 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 6 07:42:41.406094 kernel: ACPI: bus type drm_connector registered Aug 6 07:42:41.405562 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 07:42:41.408253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 07:42:41.421521 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 6 07:42:41.427133 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 6 07:42:41.436674 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 6 07:42:41.437157 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 6 07:42:41.443231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 6 07:42:41.463234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 6 07:42:41.465130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:42:41.472495 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 6 07:42:41.473229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:42:41.478727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:42:41.492227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 07:42:41.495952 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 6 07:42:41.499992 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 6 07:42:41.515451 systemd-journald[1146]: Time spent on flushing to /var/log/journal/db2d8b615ca44d77abbb8421368c847f is 37.596ms for 980 entries. Aug 6 07:42:41.515451 systemd-journald[1146]: System Journal (/var/log/journal/db2d8b615ca44d77abbb8421368c847f) is 8.0M, max 195.6M, 187.6M free. Aug 6 07:42:41.578122 systemd-journald[1146]: Received client request to flush runtime journal. Aug 6 07:42:41.541682 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 6 07:42:41.544511 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 6 07:42:41.582613 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 6 07:42:41.594602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:42:41.631237 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Aug 6 07:42:41.631270 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Aug 6 07:42:41.634124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 6 07:42:41.647217 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 6 07:42:41.659659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 07:42:41.681330 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 6 07:42:41.697225 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 6 07:42:41.732793 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 6 07:42:41.744450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 07:42:41.774634 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 6 07:42:41.774665 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Aug 6 07:42:41.784611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:42:42.233051 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 6 07:42:42.239211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:42:42.282589 systemd-udevd[1226]: Using default interface naming scheme 'v255'. Aug 6 07:42:42.307929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:42:42.315876 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 6 07:42:42.341211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 6 07:42:42.372908 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 6 07:42:42.394327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:42.395022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:42:42.401009 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1230) Aug 6 07:42:42.402272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:42:42.411134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:42:42.420331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:42:42.422089 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 6 07:42:42.422142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 6 07:42:42.422194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:42.422642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:42:42.422871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:42:42.431404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:42:42.431585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:42:42.454674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 6 07:42:42.464387 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:42:42.464597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:42:42.465188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:42:42.473456 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 6 07:42:42.476993 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 6 07:42:42.487252 kernel: ACPI: button: Power Button [PWRF] Aug 6 07:42:42.512037 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 6 07:42:42.535023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1231) Aug 6 07:42:42.558037 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 6 07:42:42.600647 systemd-networkd[1229]: lo: Link UP Aug 6 07:42:42.601109 systemd-networkd[1229]: lo: Gained carrier Aug 6 07:42:42.604154 systemd-networkd[1229]: Enumeration completed Aug 6 07:42:42.604482 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 07:42:42.605446 systemd-networkd[1229]: eth0: Configuring with /run/systemd/network/10-96:48:00:ac:84:e2.network. Aug 6 07:42:42.608222 systemd-networkd[1229]: eth1: Configuring with /run/systemd/network/10-56:20:34:24:1c:38.network. Aug 6 07:42:42.608774 systemd-networkd[1229]: eth0: Link UP Aug 6 07:42:42.608824 systemd-networkd[1229]: eth0: Gained carrier Aug 6 07:42:42.613284 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 6 07:42:42.616333 systemd-networkd[1229]: eth1: Link UP Aug 6 07:42:42.616342 systemd-networkd[1229]: eth1: Gained carrier Aug 6 07:42:42.670234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 07:42:42.682016 kernel: mousedev: PS/2 mouse device common for all mice Aug 6 07:42:42.681437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:42.729004 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 6 07:42:42.729112 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 6 07:42:42.735015 kernel: Console: switching to colour dummy device 80x25 Aug 6 07:42:42.737140 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 6 07:42:42.737204 kernel: [drm] features: -context_init Aug 6 07:42:42.737219 kernel: [drm] number of scanouts: 1 Aug 6 07:42:42.737233 kernel: [drm] number of cap sets: 0 Aug 6 07:42:42.746991 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 6 07:42:42.753988 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 6 07:42:42.754068 kernel: Console: switching to colour frame buffer device 128x48 Aug 6 07:42:42.760630 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 6 07:42:42.763426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:42:42.778544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:42:42.778800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:42:42.779874 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:42.797396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:42:42.898052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
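The networkd entries above show each NIC being configured from a generated unit named 10-&lt;MAC&gt;.network under /run/systemd/network. The sketch below writes a unit of that shape for eth0; only the file name and the match-by-MAC pattern come from the log, and the [Match]/[Network] keys shown are assumptions about what such a DHCP unit typically contains, not the contents of Flatcar's generated file.

```python
# Writes a minimal MAC-matched DHCP .network unit of the shape the log references.
# Illustrative only; the real units are generated earlier in boot from the kernel
# command line / metadata, and may carry additional options.
from pathlib import Path


def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    unit = Path(rundir) / f"10-{mac}.network"
    unit.parent.mkdir(parents=True, exist_ok=True)
    unit.write_text(
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"  # assumed; the generated unit may pin DHCPv4-specific options
    )
    return unit


if __name__ == "__main__":
    print(write_network_unit("96:48:00:ac:84:e2"))  # eth0's MAC, per the log
```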
Aug 6 07:42:42.911740 kernel: EDAC MC: Ver: 3.0.0 Aug 6 07:42:42.937509 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 6 07:42:42.943212 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 6 07:42:42.967915 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:42:43.000568 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 6 07:42:43.001825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:42:43.011340 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 6 07:42:43.019675 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:42:43.049328 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 6 07:42:43.050580 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 6 07:42:43.063207 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 6 07:42:43.063397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 6 07:42:43.063446 systemd[1]: Reached target machines.target - Containers. Aug 6 07:42:43.066245 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 6 07:42:43.089205 kernel: ISO 9660 Extensions: RRIP_1991A Aug 6 07:42:43.090570 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 6 07:42:43.093402 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 6 07:42:43.095851 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 6 07:42:43.101390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 6 07:42:43.104766 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 6 07:42:43.108350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:42:43.113339 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 6 07:42:43.126318 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 6 07:42:43.131177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 6 07:42:43.133697 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 6 07:42:43.148056 kernel: loop0: detected capacity change from 0 to 209816 Aug 6 07:42:43.153600 kernel: block loop0: the capability attribute has been deprecated. Aug 6 07:42:43.152639 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 6 07:42:43.159412 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Aug 6 07:42:43.185999 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 6 07:42:43.217193 kernel: loop1: detected capacity change from 0 to 139760 Aug 6 07:42:43.261012 kernel: loop2: detected capacity change from 0 to 8 Aug 6 07:42:43.287120 kernel: loop3: detected capacity change from 0 to 80568 Aug 6 07:42:43.329108 kernel: loop4: detected capacity change from 0 to 209816 Aug 6 07:42:43.347780 kernel: loop5: detected capacity change from 0 to 139760 Aug 6 07:42:43.367081 kernel: loop6: detected capacity change from 0 to 8 Aug 6 07:42:43.370045 kernel: loop7: detected capacity change from 0 to 80568 Aug 6 07:42:43.385935 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 6 07:42:43.386564 (sd-merge)[1323]: Merged extensions into '/usr'. Aug 6 07:42:43.391723 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Aug 6 07:42:43.391742 systemd[1]: Reloading... Aug 6 07:42:43.432233 zram_generator::config[1346]: No configuration found. Aug 6 07:42:43.699429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:42:43.705002 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 6 07:42:43.770613 systemd[1]: Reloading finished in 378 ms. Aug 6 07:42:43.791227 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 6 07:42:43.794519 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 6 07:42:43.805280 systemd[1]: Starting ensure-sysext.service... Aug 6 07:42:43.809219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 6 07:42:43.818317 systemd[1]: Reloading requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)... Aug 6 07:42:43.818341 systemd[1]: Reloading... Aug 6 07:42:43.853805 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 6 07:42:43.854781 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 6 07:42:43.855984 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 6 07:42:43.856475 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Aug 6 07:42:43.858053 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Aug 6 07:42:43.861392 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 07:42:43.861557 systemd-tmpfiles[1400]: Skipping /boot Aug 6 07:42:43.875801 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 07:42:43.875993 systemd-tmpfiles[1400]: Skipping /boot Aug 6 07:42:43.934154 zram_generator::config[1427]: No configuration found. Aug 6 07:42:44.096853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:42:44.172202 systemd[1]: Reloading finished in 353 ms. Aug 6 07:42:44.190724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Aug 6 07:42:44.207273 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 07:42:44.224319 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 6 07:42:44.229251 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 6 07:42:44.241253 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 07:42:44.257429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 6 07:42:44.267148 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.268310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:42:44.275102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:42:44.287308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:42:44.306314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:42:44.306940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:42:44.307131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.311132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.313137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:42:44.313469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:42:44.314078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.333510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:42:44.333690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:42:44.340191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.341751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:42:44.350311 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 07:42:44.351034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:42:44.351231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:42:44.352814 systemd[1]: Finished ensure-sysext.service. Aug 6 07:42:44.355631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:42:44.355821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:42:44.359704 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:42:44.359926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 6 07:42:44.360010 systemd-networkd[1229]: eth1: Gained IPv6LL Aug 6 07:42:44.369442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 6 07:42:44.381352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 6 07:42:44.383379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 6 07:42:44.393585 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 6 07:42:44.396851 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 07:42:44.397054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 07:42:44.410572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:42:44.410662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:42:44.424773 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 6 07:42:44.433281 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 6 07:42:44.435870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:42:44.438011 augenrules[1523]: No rules Aug 6 07:42:44.441836 systemd-resolved[1482]: Positive Trust Anchors: Aug 6 07:42:44.441845 systemd-resolved[1482]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 07:42:44.441882 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 07:42:44.446717 systemd-resolved[1482]: Using system hostname 'ci-4012.1.0-5-8b675ffd7f'. Aug 6 07:42:44.447295 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 07:42:44.451924 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 07:42:44.452919 systemd[1]: Reached target network.target - Network. Aug 6 07:42:44.457669 systemd[1]: Reached target network-online.target - Network is Online. Aug 6 07:42:44.458210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:42:44.466766 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 6 07:42:44.486129 systemd-networkd[1229]: eth0: Gained IPv6LL Aug 6 07:42:44.523401 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 6 07:42:44.525267 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 07:42:44.525862 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 6 07:42:44.527842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 6 07:42:44.529230 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Aug 6 07:42:44.529717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 6 07:42:44.529756 systemd[1]: Reached target paths.target - Path Units. Aug 6 07:42:44.531506 systemd[1]: Reached target time-set.target - System Time Set. Aug 6 07:42:44.532736 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 6 07:42:44.533434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 6 07:42:44.533832 systemd[1]: Reached target timers.target - Timer Units. Aug 6 07:42:45.486275 systemd-resolved[1482]: Clock change detected. Flushing caches. Aug 6 07:42:45.487077 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 6 07:42:45.487287 systemd-timesyncd[1520]: Contacted time server 64.142.54.13:123 (0.flatcar.pool.ntp.org). Aug 6 07:42:45.487346 systemd-timesyncd[1520]: Initial clock synchronization to Tue 2024-08-06 07:42:45.486191 UTC. Aug 6 07:42:45.490380 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 6 07:42:45.496450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 6 07:42:45.498860 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 6 07:42:45.500655 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 07:42:45.502070 systemd[1]: Reached target basic.target - Basic System. Aug 6 07:42:45.503557 systemd[1]: System is tainted: cgroupsv1 Aug 6 07:42:45.503650 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:42:45.503682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:42:45.510893 systemd[1]: Starting containerd.service - containerd container runtime... Aug 6 07:42:45.517608 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 6 07:42:45.533995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 6 07:42:45.539743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 6 07:42:45.554770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 6 07:42:45.556334 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 6 07:42:45.566184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:42:45.567386 jq[1541]: false Aug 6 07:42:45.572723 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 6 07:42:45.583855 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 6 07:42:45.595284 coreos-metadata[1536]: Aug 06 07:42:45.595 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:42:45.595401 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 6 07:42:45.603390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 6 07:42:45.606095 dbus-daemon[1538]: [system] SELinux support is enabled Aug 6 07:42:45.616504 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 6 07:42:45.624633 coreos-metadata[1536]: Aug 06 07:42:45.623 INFO Fetch successful Aug 6 07:42:45.628669 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 6 07:42:45.630197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 6 07:42:45.636901 systemd[1]: Starting update-engine.service - Update Engine... Aug 6 07:42:45.642177 extend-filesystems[1542]: Found loop4 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found loop5 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found loop6 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found loop7 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda1 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda2 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda3 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found usr Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda4 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda6 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda7 Aug 6 07:42:45.643109 extend-filesystems[1542]: Found vda9 Aug 6 07:42:45.643109 extend-filesystems[1542]: Checking size of /dev/vda9 Aug 6 07:42:45.653920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 6 07:42:45.657832 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 6 07:42:45.729326 update_engine[1554]: I0806 07:42:45.700019 1554 main.cc:92] Flatcar Update Engine starting Aug 6 07:42:45.729326 update_engine[1554]: I0806 07:42:45.724290 1554 update_check_scheduler.cc:74] Next update check in 10m5s Aug 6 07:42:45.681001 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 6 07:42:45.731483 jq[1556]: true Aug 6 07:42:45.686316 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 6 07:42:45.706286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 6 07:42:45.706886 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 6 07:42:45.735598 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 6 07:42:45.745116 systemd[1]: motdgen.service: Deactivated successfully. Aug 6 07:42:45.745364 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 6 07:42:45.769636 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1233) Aug 6 07:42:45.772616 extend-filesystems[1542]: Resized partition /dev/vda9 Aug 6 07:42:45.783319 extend-filesystems[1589]: resize2fs 1.47.0 (5-Feb-2023) Aug 6 07:42:45.785547 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 6 07:42:45.802617 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 6 07:42:45.829781 jq[1583]: true Aug 6 07:42:45.844542 systemd[1]: Started update-engine.service - Update Engine. Aug 6 07:42:45.846273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 6 07:42:45.846303 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 6 07:42:45.846782 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Aug 6 07:42:45.846868 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 6 07:42:45.846884 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 6 07:42:45.850076 tar[1575]: linux-amd64/helm Aug 6 07:42:45.852724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 6 07:42:45.855249 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 6 07:42:45.858391 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 6 07:42:45.863668 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 6 07:42:46.043617 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 6 07:42:46.050011 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 6 07:42:46.091861 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:42:46.075901 systemd[1]: Starting sshkeys.service... Aug 6 07:42:46.107329 extend-filesystems[1589]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 6 07:42:46.107329 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 6 07:42:46.107329 extend-filesystems[1589]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 6 07:42:46.103075 systemd-logind[1553]: New seat seat0. Aug 6 07:42:46.122006 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Aug 6 07:42:46.122006 extend-filesystems[1542]: Found vdb Aug 6 07:42:46.103677 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 6 07:42:46.104050 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 6 07:42:46.104359 systemd-logind[1553]: Watching system buttons on /dev/input/event1 (Power Button) Aug 6 07:42:46.104384 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 6 07:42:46.112165 systemd[1]: Started systemd-logind.service - User Login Management. Aug 6 07:42:46.158522 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 6 07:42:46.190591 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 6 07:42:46.282988 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 6 07:42:46.336842 coreos-metadata[1638]: Aug 06 07:42:46.334 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:42:46.355240 coreos-metadata[1638]: Aug 06 07:42:46.354 INFO Fetch successful Aug 6 07:42:46.363751 unknown[1638]: wrote ssh authorized keys file for user: core Aug 6 07:42:46.399606 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:42:46.396164 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 6 07:42:46.403641 systemd[1]: Finished sshkeys.service. Aug 6 07:42:46.524158 containerd[1585]: time="2024-08-06T07:42:46.522702127Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 6 07:42:46.585446 containerd[1585]: time="2024-08-06T07:42:46.585381582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 6 07:42:46.585446 containerd[1585]: time="2024-08-06T07:42:46.585438062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.590188604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.591637857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.591996790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.592032838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.592122772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.592172665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.592184993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592282 containerd[1585]: time="2024-08-06T07:42:46.592248420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592577 containerd[1585]: time="2024-08-06T07:42:46.592463281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.592577 containerd[1585]: time="2024-08-06T07:42:46.592482566Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 6 07:42:46.592577 containerd[1585]: time="2024-08-06T07:42:46.592492942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:42:46.593017 containerd[1585]: time="2024-08-06T07:42:46.592689712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:42:46.593017 containerd[1585]: time="2024-08-06T07:42:46.592709941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 6 07:42:46.593017 containerd[1585]: time="2024-08-06T07:42:46.592820510Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 6 07:42:46.593017 containerd[1585]: time="2024-08-06T07:42:46.592849263Z" level=info msg="metadata content store policy set" policy=shared Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598395112Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598449377Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598465335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598503429Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598517317Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598528703Z" level=info msg="NRI interface is disabled by configuration." Aug 6 07:42:46.598549 containerd[1585]: time="2024-08-06T07:42:46.598541270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598722758Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598739312Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598771686Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598786385Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598801991Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598822498Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598835970Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598847361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598860254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598873486Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598885885Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598897251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 6 07:42:46.600642 containerd[1585]: time="2024-08-06T07:42:46.598998766Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599337061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599363781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599377510Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599400621Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599452072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599467851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599479378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599490170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599502233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599515287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599527783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.601038 containerd[1585]: time="2024-08-06T07:42:46.599540650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.602713 containerd[1585]: time="2024-08-06T07:42:46.599569862Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 6 07:42:46.603068 containerd[1585]: time="2024-08-06T07:42:46.603031940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603125 containerd[1585]: time="2024-08-06T07:42:46.603082135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603125 containerd[1585]: time="2024-08-06T07:42:46.603104936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 6 07:42:46.603169 containerd[1585]: time="2024-08-06T07:42:46.603125013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603169 containerd[1585]: time="2024-08-06T07:42:46.603143532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603211 containerd[1585]: time="2024-08-06T07:42:46.603168413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603211 containerd[1585]: time="2024-08-06T07:42:46.603185250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.603211 containerd[1585]: time="2024-08-06T07:42:46.603200367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 6 07:42:46.607869 containerd[1585]: time="2024-08-06T07:42:46.607681610Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 6 07:42:46.607869 containerd[1585]: 
time="2024-08-06T07:42:46.607795858Z" level=info msg="Connect containerd service" Aug 6 07:42:46.607869 containerd[1585]: time="2024-08-06T07:42:46.607848864Z" level=info msg="using legacy CRI server" Aug 6 07:42:46.607869 containerd[1585]: time="2024-08-06T07:42:46.607856196Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 6 07:42:46.609010 containerd[1585]: time="2024-08-06T07:42:46.607967569Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 6 07:42:46.609010 containerd[1585]: time="2024-08-06T07:42:46.608719394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 6 07:42:46.609010 containerd[1585]: time="2024-08-06T07:42:46.608771762Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 6 07:42:46.609010 containerd[1585]: time="2024-08-06T07:42:46.608792220Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 6 07:42:46.610621 containerd[1585]: time="2024-08-06T07:42:46.608803389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 6 07:42:46.610713 containerd[1585]: time="2024-08-06T07:42:46.610633100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 6 07:42:46.610713 containerd[1585]: time="2024-08-06T07:42:46.608827447Z" level=info msg="Start subscribing containerd event" Aug 6 07:42:46.610775 containerd[1585]: time="2024-08-06T07:42:46.610717380Z" level=info msg="Start recovering state" Aug 6 07:42:46.610798 containerd[1585]: time="2024-08-06T07:42:46.610790872Z" level=info msg="Start event monitor" Aug 6 07:42:46.610820 containerd[1585]: time="2024-08-06T07:42:46.610801420Z" level=info msg="Start snapshots syncer" Aug 6 07:42:46.610820 containerd[1585]: time="2024-08-06T07:42:46.610810052Z" level=info msg="Start cni network conf syncer for default" Aug 6 07:42:46.610820 containerd[1585]: time="2024-08-06T07:42:46.610818642Z" level=info msg="Start streaming server" Aug 6 07:42:46.612135 containerd[1585]: time="2024-08-06T07:42:46.611377157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 6 07:42:46.615383 containerd[1585]: time="2024-08-06T07:42:46.612341801Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 6 07:42:46.615383 containerd[1585]: time="2024-08-06T07:42:46.612424875Z" level=info msg="containerd successfully booted in 0.094384s" Aug 6 07:42:46.613017 systemd[1]: Started containerd.service - containerd container runtime. Aug 6 07:42:46.751448 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 6 07:42:46.804984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 6 07:42:46.820328 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 6 07:42:46.844469 systemd[1]: issuegen.service: Deactivated successfully. Aug 6 07:42:46.844786 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 6 07:42:46.855047 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Aug 6 07:42:46.886717 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 6 07:42:46.899019 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 6 07:42:46.911256 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 6 07:42:46.913235 systemd[1]: Reached target getty.target - Login Prompts. Aug 6 07:42:47.030974 tar[1575]: linux-amd64/LICENSE Aug 6 07:42:47.031496 tar[1575]: linux-amd64/README.md Aug 6 07:42:47.055194 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 6 07:42:47.268857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:42:47.272308 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 6 07:42:47.276789 systemd[1]: Startup finished in 6.831s (kernel) + 5.949s (userspace) = 12.781s. Aug 6 07:42:47.291807 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:42:47.823093 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 6 07:42:47.830183 systemd[1]: Started sshd@0-64.23.156.122:22-139.178.89.65:41728.service - OpenSSH per-connection server daemon (139.178.89.65:41728). Aug 6 07:42:47.912686 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 41728 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:47.915505 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:47.931138 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 6 07:42:47.940143 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 6 07:42:47.951901 systemd-logind[1553]: New session 1 of user core. Aug 6 07:42:47.973873 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 6 07:42:47.991146 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 6 07:42:48.002898 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.134971 kubelet[1687]: E0806 07:42:48.134822 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:42:48.138426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:42:48.141658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:42:48.164010 systemd[1703]: Queued start job for default target default.target. Aug 6 07:42:48.164630 systemd[1703]: Created slice app.slice - User Application Slice. Aug 6 07:42:48.164670 systemd[1703]: Reached target paths.target - Paths. Aug 6 07:42:48.164692 systemd[1703]: Reached target timers.target - Timers. Aug 6 07:42:48.176821 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 6 07:42:48.186780 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 6 07:42:48.186852 systemd[1703]: Reached target sockets.target - Sockets. Aug 6 07:42:48.186868 systemd[1703]: Reached target basic.target - Basic System. Aug 6 07:42:48.186932 systemd[1703]: Reached target default.target - Main User Target. Aug 6 07:42:48.186965 systemd[1703]: Startup finished in 172ms. 
Aug 6 07:42:48.187953 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 6 07:42:48.195800 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 6 07:42:48.261505 systemd[1]: Started sshd@1-64.23.156.122:22-139.178.89.65:41738.service - OpenSSH per-connection server daemon (139.178.89.65:41738). Aug 6 07:42:48.316697 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 41738 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:48.318563 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.326343 systemd-logind[1553]: New session 2 of user core. Aug 6 07:42:48.333268 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 6 07:42:48.399928 sshd[1718]: pam_unix(sshd:session): session closed for user core Aug 6 07:42:48.408230 systemd[1]: Started sshd@2-64.23.156.122:22-139.178.89.65:41750.service - OpenSSH per-connection server daemon (139.178.89.65:41750). Aug 6 07:42:48.409098 systemd[1]: sshd@1-64.23.156.122:22-139.178.89.65:41738.service: Deactivated successfully. Aug 6 07:42:48.414109 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Aug 6 07:42:48.414408 systemd[1]: session-2.scope: Deactivated successfully. Aug 6 07:42:48.424311 systemd-logind[1553]: Removed session 2. Aug 6 07:42:48.470378 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 41750 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:48.472366 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.480962 systemd-logind[1553]: New session 3 of user core. Aug 6 07:42:48.487310 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 6 07:42:48.549654 sshd[1723]: pam_unix(sshd:session): session closed for user core Aug 6 07:42:48.555285 systemd[1]: sshd@2-64.23.156.122:22-139.178.89.65:41750.service: Deactivated successfully. Aug 6 07:42:48.560720 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Aug 6 07:42:48.560848 systemd[1]: session-3.scope: Deactivated successfully. Aug 6 07:42:48.568603 systemd[1]: Started sshd@3-64.23.156.122:22-139.178.89.65:41752.service - OpenSSH per-connection server daemon (139.178.89.65:41752). Aug 6 07:42:48.570112 systemd-logind[1553]: Removed session 3. Aug 6 07:42:48.626046 sshd[1734]: Accepted publickey for core from 139.178.89.65 port 41752 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:48.628270 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.635993 systemd-logind[1553]: New session 4 of user core. Aug 6 07:42:48.648230 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 6 07:42:48.718870 sshd[1734]: pam_unix(sshd:session): session closed for user core Aug 6 07:42:48.729026 systemd[1]: Started sshd@4-64.23.156.122:22-139.178.89.65:41766.service - OpenSSH per-connection server daemon (139.178.89.65:41766). Aug 6 07:42:48.729793 systemd[1]: sshd@3-64.23.156.122:22-139.178.89.65:41752.service: Deactivated successfully. Aug 6 07:42:48.734031 systemd[1]: session-4.scope: Deactivated successfully. Aug 6 07:42:48.736788 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Aug 6 07:42:48.741721 systemd-logind[1553]: Removed session 4. 
Aug 6 07:42:48.787148 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 41766 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:48.789167 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.795827 systemd-logind[1553]: New session 5 of user core. Aug 6 07:42:48.802217 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 6 07:42:48.881794 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 6 07:42:48.882661 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:42:48.897169 sudo[1746]: pam_unix(sudo:session): session closed for user root Aug 6 07:42:48.901977 sshd[1739]: pam_unix(sshd:session): session closed for user core Aug 6 07:42:48.914247 systemd[1]: Started sshd@5-64.23.156.122:22-139.178.89.65:41782.service - OpenSSH per-connection server daemon (139.178.89.65:41782). Aug 6 07:42:48.915378 systemd[1]: sshd@4-64.23.156.122:22-139.178.89.65:41766.service: Deactivated successfully. Aug 6 07:42:48.919361 systemd[1]: session-5.scope: Deactivated successfully. Aug 6 07:42:48.922339 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Aug 6 07:42:48.924647 systemd-logind[1553]: Removed session 5. Aug 6 07:42:48.957877 sshd[1749]: Accepted publickey for core from 139.178.89.65 port 41782 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:48.960163 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:48.967917 systemd-logind[1553]: New session 6 of user core. Aug 6 07:42:48.982657 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 6 07:42:49.046298 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 6 07:42:49.046686 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:42:49.051370 sudo[1756]: pam_unix(sudo:session): session closed for user root Aug 6 07:42:49.061261 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 6 07:42:49.061697 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:42:49.079050 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 6 07:42:49.094154 auditctl[1759]: No rules Aug 6 07:42:49.094935 systemd[1]: audit-rules.service: Deactivated successfully. Aug 6 07:42:49.095211 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 6 07:42:49.102130 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 07:42:49.141739 augenrules[1778]: No rules Aug 6 07:42:49.143760 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 07:42:49.146117 sudo[1755]: pam_unix(sudo:session): session closed for user root Aug 6 07:42:49.151856 sshd[1749]: pam_unix(sshd:session): session closed for user core Aug 6 07:42:49.161210 systemd[1]: Started sshd@6-64.23.156.122:22-139.178.89.65:41794.service - OpenSSH per-connection server daemon (139.178.89.65:41794). Aug 6 07:42:49.161933 systemd[1]: sshd@5-64.23.156.122:22-139.178.89.65:41782.service: Deactivated successfully. Aug 6 07:42:49.164958 systemd[1]: session-6.scope: Deactivated successfully. Aug 6 07:42:49.166547 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Aug 6 07:42:49.171000 systemd-logind[1553]: Removed session 6. 
Aug 6 07:42:49.212883 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 41794 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:42:49.215463 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:42:49.225198 systemd-logind[1553]: New session 7 of user core. Aug 6 07:42:49.232284 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 6 07:42:49.292248 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 6 07:42:49.292652 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:42:49.442274 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 6 07:42:49.444514 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 6 07:42:49.836396 dockerd[1800]: time="2024-08-06T07:42:49.836298183Z" level=info msg="Starting up" Aug 6 07:42:49.941354 dockerd[1800]: time="2024-08-06T07:42:49.940764094Z" level=info msg="Loading containers: start." Aug 6 07:42:50.074629 kernel: Initializing XFRM netlink socket Aug 6 07:42:50.166747 systemd-networkd[1229]: docker0: Link UP Aug 6 07:42:50.188780 dockerd[1800]: time="2024-08-06T07:42:50.188720602Z" level=info msg="Loading containers: done." Aug 6 07:42:50.284187 dockerd[1800]: time="2024-08-06T07:42:50.283365452Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 6 07:42:50.284187 dockerd[1800]: time="2024-08-06T07:42:50.283685615Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 6 07:42:50.284187 dockerd[1800]: time="2024-08-06T07:42:50.283917005Z" level=info msg="Daemon has completed initialization" Aug 6 07:42:50.319940 dockerd[1800]: time="2024-08-06T07:42:50.319604305Z" level=info msg="API listen on /run/docker.sock" Aug 6 07:42:50.320329 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 6 07:42:51.300600 containerd[1585]: time="2024-08-06T07:42:51.300479751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 6 07:42:52.000852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386534419.mount: Deactivated successfully. 
Aug 6 07:42:53.515126 containerd[1585]: time="2024-08-06T07:42:53.513695758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:53.515126 containerd[1585]: time="2024-08-06T07:42:53.514614760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317" Aug 6 07:42:53.515126 containerd[1585]: time="2024-08-06T07:42:53.515059111Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:53.518044 containerd[1585]: time="2024-08-06T07:42:53.517991932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:53.519713 containerd[1585]: time="2024-08-06T07:42:53.519669890Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 2.219041297s" Aug 6 07:42:53.520005 containerd[1585]: time="2024-08-06T07:42:53.519984720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 6 07:42:53.550542 containerd[1585]: time="2024-08-06T07:42:53.550482741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 6 07:42:55.273951 containerd[1585]: time="2024-08-06T07:42:55.273868040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:55.275440 containerd[1585]: time="2024-08-06T07:42:55.275365456Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067" Aug 6 07:42:55.276416 containerd[1585]: time="2024-08-06T07:42:55.276349391Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:55.280627 containerd[1585]: time="2024-08-06T07:42:55.280428985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:55.282266 containerd[1585]: time="2024-08-06T07:42:55.282069833Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 1.731541558s" Aug 6 07:42:55.282266 containerd[1585]: time="2024-08-06T07:42:55.282133427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 6 07:42:55.320889 
containerd[1585]: time="2024-08-06T07:42:55.320764373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 6 07:42:56.419619 containerd[1585]: time="2024-08-06T07:42:56.417887673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:56.419619 containerd[1585]: time="2024-08-06T07:42:56.419064550Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295" Aug 6 07:42:56.420694 containerd[1585]: time="2024-08-06T07:42:56.420641448Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:56.423806 containerd[1585]: time="2024-08-06T07:42:56.423679680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:56.425442 containerd[1585]: time="2024-08-06T07:42:56.425388078Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.104563512s" Aug 6 07:42:56.425442 containerd[1585]: time="2024-08-06T07:42:56.425435487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 6 07:42:56.462461 containerd[1585]: time="2024-08-06T07:42:56.462403515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 6 07:42:57.621878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848838704.mount: Deactivated successfully. 
Aug 6 07:42:58.161644 containerd[1585]: time="2024-08-06T07:42:58.160630539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.162510 containerd[1585]: time="2024-08-06T07:42:58.162443749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769" Aug 6 07:42:58.163101 containerd[1585]: time="2024-08-06T07:42:58.163050807Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.165918 containerd[1585]: time="2024-08-06T07:42:58.165865103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.167193 containerd[1585]: time="2024-08-06T07:42:58.167141629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 1.70438324s" Aug 6 07:42:58.167382 containerd[1585]: time="2024-08-06T07:42:58.167359942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\"" Aug 6 07:42:58.199003 containerd[1585]: time="2024-08-06T07:42:58.198952456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 6 07:42:58.210370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 6 07:42:58.218946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:42:58.389914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:42:58.403201 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:42:58.490839 kubelet[2035]: E0806 07:42:58.490717 2035 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:42:58.495868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:42:58.496118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:42:58.827310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846592781.mount: Deactivated successfully. 
Aug 6 07:42:58.833627 containerd[1585]: time="2024-08-06T07:42:58.833552591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.834909 containerd[1585]: time="2024-08-06T07:42:58.834832436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Aug 6 07:42:58.835845 containerd[1585]: time="2024-08-06T07:42:58.835726856Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.838159 containerd[1585]: time="2024-08-06T07:42:58.838101353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:42:58.839826 containerd[1585]: time="2024-08-06T07:42:58.839535353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 640.199587ms" Aug 6 07:42:58.839826 containerd[1585]: time="2024-08-06T07:42:58.839598558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 6 07:42:58.869504 containerd[1585]: time="2024-08-06T07:42:58.869444581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 6 07:42:59.490244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181087551.mount: Deactivated successfully. 
Aug 6 07:43:01.638633 containerd[1585]: time="2024-08-06T07:43:01.637270195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:01.640093 containerd[1585]: time="2024-08-06T07:43:01.640008472Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Aug 6 07:43:01.641065 containerd[1585]: time="2024-08-06T07:43:01.641016444Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:01.646360 containerd[1585]: time="2024-08-06T07:43:01.646292612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:01.648646 containerd[1585]: time="2024-08-06T07:43:01.648552904Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.779047721s" Aug 6 07:43:01.648999 containerd[1585]: time="2024-08-06T07:43:01.648837751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 6 07:43:01.699628 containerd[1585]: time="2024-08-06T07:43:01.699232745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 6 07:43:02.368719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616708050.mount: Deactivated successfully. 
Aug 6 07:43:03.004682 containerd[1585]: time="2024-08-06T07:43:03.004623217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:03.005829 containerd[1585]: time="2024-08-06T07:43:03.005744765Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Aug 6 07:43:03.006615 containerd[1585]: time="2024-08-06T07:43:03.006435351Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:03.010352 containerd[1585]: time="2024-08-06T07:43:03.010266342Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.310967179s" Aug 6 07:43:03.010352 containerd[1585]: time="2024-08-06T07:43:03.010337751Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 6 07:43:03.010622 containerd[1585]: time="2024-08-06T07:43:03.010566921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:06.119382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:43:06.128074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:43:06.164364 systemd[1]: Reloading requested from client PID 2177 ('systemctl') (unit session-7.scope)... Aug 6 07:43:06.164380 systemd[1]: Reloading... Aug 6 07:43:06.291102 zram_generator::config[2215]: No configuration found. Aug 6 07:43:06.467694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:43:06.549180 systemd[1]: Reloading finished in 383 ms. Aug 6 07:43:06.630302 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 6 07:43:06.630461 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 6 07:43:06.630966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:43:06.638158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:43:06.820048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:43:06.825875 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 07:43:06.899353 kubelet[2280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:43:06.899353 kubelet[2280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 6 07:43:06.899353 kubelet[2280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:43:06.900021 kubelet[2280]: I0806 07:43:06.899390 2280 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 07:43:07.417544 kubelet[2280]: I0806 07:43:07.417496 2280 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 07:43:07.418037 kubelet[2280]: I0806 07:43:07.417816 2280 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 07:43:07.418357 kubelet[2280]: I0806 07:43:07.418262 2280 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 07:43:07.441371 kubelet[2280]: I0806 07:43:07.441304 2280 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:43:07.442684 kubelet[2280]: E0806 07:43:07.441631 2280 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.156.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.460987 kubelet[2280]: I0806 07:43:07.460940 2280 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 6 07:43:07.464676 kubelet[2280]: I0806 07:43:07.464566 2280 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 07:43:07.464871 kubelet[2280]: I0806 07:43:07.464847 2280 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 07:43:07.465327 kubelet[2280]: I0806 07:43:07.465285 2280 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 07:43:07.465327 kubelet[2280]: I0806 07:43:07.465326 2280 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 
07:43:07.466077 kubelet[2280]: I0806 07:43:07.466025 2280 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:43:07.468075 kubelet[2280]: I0806 07:43:07.467740 2280 kubelet.go:393] "Attempting to sync node with API server" Aug 6 07:43:07.468075 kubelet[2280]: I0806 07:43:07.467848 2280 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 07:43:07.468075 kubelet[2280]: I0806 07:43:07.467903 2280 kubelet.go:309] "Adding apiserver pod source" Aug 6 07:43:07.468075 kubelet[2280]: I0806 07:43:07.467927 2280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 07:43:07.471235 kubelet[2280]: W0806 07:43:07.470547 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.156.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-5-8b675ffd7f&limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.471235 kubelet[2280]: E0806 07:43:07.470696 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.156.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-5-8b675ffd7f&limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.471235 kubelet[2280]: I0806 07:43:07.470815 2280 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 07:43:07.474442 kubelet[2280]: W0806 07:43:07.474407 2280 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 6 07:43:07.478162 kubelet[2280]: W0806 07:43:07.478083 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.156.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.478162 kubelet[2280]: E0806 07:43:07.478166 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.156.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.478661 kubelet[2280]: I0806 07:43:07.478627 2280 server.go:1232] "Started kubelet" Aug 6 07:43:07.479101 kubelet[2280]: I0806 07:43:07.479081 2280 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 07:43:07.481269 kubelet[2280]: I0806 07:43:07.481234 2280 server.go:462] "Adding debug handlers to kubelet server" Aug 6 07:43:07.483623 kubelet[2280]: I0806 07:43:07.482330 2280 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 07:43:07.483623 kubelet[2280]: I0806 07:43:07.482768 2280 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 07:43:07.483623 kubelet[2280]: E0806 07:43:07.483050 2280 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.1.0-5-8b675ffd7f.17e913e085757c4c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.1.0-5-8b675ffd7f", UID:"ci-4012.1.0-5-8b675ffd7f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.1.0-5-8b675ffd7f"}, FirstTimestamp:time.Date(2024, time.August, 6, 7, 43, 7, 478596684, time.Local), LastTimestamp:time.Date(2024, time.August, 6, 7, 43, 7, 478596684, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.1.0-5-8b675ffd7f"}': 'Post "https://64.23.156.122:6443/api/v1/namespaces/default/events": dial tcp 64.23.156.122:6443: connect: connection refused'(may retry after sleeping) Aug 6 07:43:07.486893 kubelet[2280]: I0806 07:43:07.486855 2280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 07:43:07.488602 kubelet[2280]: E0806 07:43:07.488550 2280 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 07:43:07.488602 kubelet[2280]: E0806 07:43:07.488607 2280 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 07:43:07.490029 kubelet[2280]: I0806 07:43:07.489996 2280 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 07:43:07.492410 kubelet[2280]: E0806 07:43:07.492377 2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.156.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-5-8b675ffd7f?timeout=10s\": dial tcp 64.23.156.122:6443: connect: connection refused" interval="200ms" Aug 6 07:43:07.499105 kubelet[2280]: I0806 07:43:07.498839 2280 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 07:43:07.499105 kubelet[2280]: I0806 07:43:07.498948 2280 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 07:43:07.509381 kubelet[2280]: W0806 07:43:07.509272 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.156.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.509381 kubelet[2280]: E0806 07:43:07.509338 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.156.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.521911 kubelet[2280]: I0806 07:43:07.521757 2280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 07:43:07.524356 kubelet[2280]: I0806 07:43:07.523874 2280 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 6 07:43:07.524356 kubelet[2280]: I0806 07:43:07.523908 2280 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 07:43:07.524356 kubelet[2280]: I0806 07:43:07.523932 2280 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 07:43:07.524356 kubelet[2280]: E0806 07:43:07.524035 2280 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 07:43:07.531249 kubelet[2280]: W0806 07:43:07.531192 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.156.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.531558 kubelet[2280]: E0806 07:43:07.531457 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.156.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:07.558195 kubelet[2280]: I0806 07:43:07.557835 2280 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 07:43:07.558195 kubelet[2280]: I0806 07:43:07.557858 2280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 07:43:07.558195 kubelet[2280]: I0806 07:43:07.557884 2280 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:43:07.560638 kubelet[2280]: I0806 07:43:07.560568 2280 policy_none.go:49] "None policy: Start" Aug 6 07:43:07.561682 kubelet[2280]: I0806 07:43:07.561657 2280 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 07:43:07.562397 kubelet[2280]: I0806 07:43:07.561945 2280 state_mem.go:35] "Initializing new in-memory state store" Aug 6 07:43:07.572988 kubelet[2280]: I0806 07:43:07.572935 2280 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 07:43:07.577357 kubelet[2280]: I0806 07:43:07.577309 2280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 07:43:07.578968 kubelet[2280]: E0806 07:43:07.578910 2280 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.1.0-5-8b675ffd7f\" not found" Aug 6 07:43:07.591663 kubelet[2280]: I0806 07:43:07.591630 2280 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.592311 kubelet[2280]: E0806 07:43:07.592274 2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.156.122:6443/api/v1/nodes\": dial tcp 64.23.156.122:6443: connect: connection refused" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.625017 kubelet[2280]: I0806 07:43:07.624957 2280 topology_manager.go:215] "Topology Admit Handler" podUID="5817888588db969fe4b5249727c36bd4" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.626797 kubelet[2280]: I0806 07:43:07.626388 2280 topology_manager.go:215] "Topology Admit Handler" podUID="3823f39468d291db3c4cb589250e6b8b" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.629824 kubelet[2280]: I0806 07:43:07.629782 2280 topology_manager.go:215] "Topology Admit Handler" podUID="5ab1ec84ce21a63b57888d357554909e" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-5-8b675ffd7f" Aug 6 
07:43:07.694904 kubelet[2280]: E0806 07:43:07.694006 2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.156.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-5-8b675ffd7f?timeout=10s\": dial tcp 64.23.156.122:6443: connect: connection refused" interval="400ms" Aug 6 07:43:07.700568 kubelet[2280]: I0806 07:43:07.700461 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700568 kubelet[2280]: I0806 07:43:07.700534 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700833 kubelet[2280]: I0806 07:43:07.700677 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700833 kubelet[2280]: I0806 07:43:07.700764 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700833 kubelet[2280]: I0806 07:43:07.700798 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ab1ec84ce21a63b57888d357554909e-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5ab1ec84ce21a63b57888d357554909e\") " pod="kube-system/kube-scheduler-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700981 kubelet[2280]: I0806 07:43:07.700875 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700981 kubelet[2280]: I0806 07:43:07.700910 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.700981 kubelet[2280]: I0806 07:43:07.700940 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.701126 kubelet[2280]: I0806 07:43:07.700984 2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.794040 kubelet[2280]: I0806 07:43:07.793965 2280 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.794660 kubelet[2280]: E0806 07:43:07.794614 2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.156.122:6443/api/v1/nodes\": dial tcp 64.23.156.122:6443: connect: connection refused" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:07.814031 systemd[1]: Started sshd@7-64.23.156.122:22-175.125.95.234:59196.service - OpenSSH per-connection server daemon (175.125.95.234:59196). Aug 6 07:43:07.932999 kubelet[2280]: E0806 07:43:07.932943 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:07.933754 containerd[1585]: time="2024-08-06T07:43:07.933707931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-5-8b675ffd7f,Uid:5817888588db969fe4b5249727c36bd4,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:07.939430 kubelet[2280]: E0806 07:43:07.939344 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:07.942979 kubelet[2280]: E0806 07:43:07.942929 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:07.945567 containerd[1585]: time="2024-08-06T07:43:07.944819921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-5-8b675ffd7f,Uid:3823f39468d291db3c4cb589250e6b8b,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:07.945567 containerd[1585]: time="2024-08-06T07:43:07.945173069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-5-8b675ffd7f,Uid:5ab1ec84ce21a63b57888d357554909e,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:08.095312 kubelet[2280]: E0806 07:43:08.095237 2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.156.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-5-8b675ffd7f?timeout=10s\": dial tcp 64.23.156.122:6443: connect: connection refused" interval="800ms" Aug 6 07:43:08.196856 kubelet[2280]: I0806 07:43:08.196297 2280 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:08.196856 kubelet[2280]: E0806 07:43:08.196778 2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.156.122:6443/api/v1/nodes\": dial tcp 64.23.156.122:6443: connect: connection refused" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:08.369225 kubelet[2280]: W0806 
07:43:08.369126 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.156.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:08.369225 kubelet[2280]: E0806 07:43:08.369218 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.156.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:08.397476 sshd[2310]: Invalid user chatwithher from 175.125.95.234 port 59196 Aug 6 07:43:08.501609 kubelet[2280]: W0806 07:43:08.501442 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.156.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-5-8b675ffd7f&limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:08.501609 kubelet[2280]: E0806 07:43:08.501536 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.156.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-5-8b675ffd7f&limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:08.538874 sshd[2310]: Connection closed by invalid user chatwithher 175.125.95.234 port 59196 [preauth] Aug 6 07:43:08.543584 systemd[1]: sshd@7-64.23.156.122:22-175.125.95.234:59196.service: Deactivated successfully. Aug 6 07:43:08.556783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572644044.mount: Deactivated successfully. Aug 6 07:43:08.559400 containerd[1585]: time="2024-08-06T07:43:08.559214937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:43:08.561216 containerd[1585]: time="2024-08-06T07:43:08.561159119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:43:08.562842 containerd[1585]: time="2024-08-06T07:43:08.562790753Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:43:08.564271 containerd[1585]: time="2024-08-06T07:43:08.563753263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 07:43:08.564637 containerd[1585]: time="2024-08-06T07:43:08.564522368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 6 07:43:08.564737 containerd[1585]: time="2024-08-06T07:43:08.564690273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 07:43:08.566601 containerd[1585]: time="2024-08-06T07:43:08.565054380Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:43:08.571540 containerd[1585]: time="2024-08-06T07:43:08.571466611Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:43:08.572872 containerd[1585]: time="2024-08-06T07:43:08.572820460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.855875ms" Aug 6 07:43:08.575913 containerd[1585]: time="2024-08-06T07:43:08.575499750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.682587ms" Aug 6 07:43:08.579137 containerd[1585]: time="2024-08-06T07:43:08.579084941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.81419ms" Aug 6 07:43:08.729936 containerd[1585]: time="2024-08-06T07:43:08.729805993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:08.730280 containerd[1585]: time="2024-08-06T07:43:08.730234475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.730444 containerd[1585]: time="2024-08-06T07:43:08.730398027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:08.730660 containerd[1585]: time="2024-08-06T07:43:08.730617604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.732547 containerd[1585]: time="2024-08-06T07:43:08.732163676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:08.732547 containerd[1585]: time="2024-08-06T07:43:08.732252566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.732547 containerd[1585]: time="2024-08-06T07:43:08.732288159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:08.732547 containerd[1585]: time="2024-08-06T07:43:08.732303227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.735000 containerd[1585]: time="2024-08-06T07:43:08.734878942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:08.735935 containerd[1585]: time="2024-08-06T07:43:08.734960900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.736063 containerd[1585]: time="2024-08-06T07:43:08.735732228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:08.736063 containerd[1585]: time="2024-08-06T07:43:08.735870406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:08.894362 containerd[1585]: time="2024-08-06T07:43:08.894106219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-5-8b675ffd7f,Uid:5817888588db969fe4b5249727c36bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e696a1afa515dbbef2e96439b6b0dba5576261ac1a3307ed146d4deb910029f8\"" Aug 6 07:43:08.899159 kubelet[2280]: E0806 07:43:08.897596 2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.156.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-5-8b675ffd7f?timeout=10s\": dial tcp 64.23.156.122:6443: connect: connection refused" interval="1.6s" Aug 6 07:43:08.901630 kubelet[2280]: E0806 07:43:08.900790 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:08.905625 containerd[1585]: time="2024-08-06T07:43:08.904368303Z" level=info msg="CreateContainer within sandbox \"e696a1afa515dbbef2e96439b6b0dba5576261ac1a3307ed146d4deb910029f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 6 07:43:08.905625 containerd[1585]: time="2024-08-06T07:43:08.904572767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-5-8b675ffd7f,Uid:3823f39468d291db3c4cb589250e6b8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"26109ef2b79fc8c1779d3dc26cb14ae6d4678a90fb9df117ece6621742cf649b\"" Aug 6 07:43:08.910785 kubelet[2280]: E0806 07:43:08.910727 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:08.917869 containerd[1585]: time="2024-08-06T07:43:08.917774849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-5-8b675ffd7f,Uid:5ab1ec84ce21a63b57888d357554909e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7543acc3f956ad00b2f88f660455aac67f041f3f5ea95594d3527c7f2dd0d999\"" Aug 6 07:43:08.922399 containerd[1585]: time="2024-08-06T07:43:08.922352048Z" level=info msg="CreateContainer within sandbox \"26109ef2b79fc8c1779d3dc26cb14ae6d4678a90fb9df117ece6621742cf649b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 6 07:43:08.923620 kubelet[2280]: E0806 07:43:08.923392 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:08.928839 containerd[1585]: time="2024-08-06T07:43:08.928788953Z" level=info msg="CreateContainer within sandbox \"7543acc3f956ad00b2f88f660455aac67f041f3f5ea95594d3527c7f2dd0d999\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 6 07:43:08.943086 containerd[1585]: time="2024-08-06T07:43:08.943022672Z" level=info msg="CreateContainer within sandbox 
\"e696a1afa515dbbef2e96439b6b0dba5576261ac1a3307ed146d4deb910029f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c543b921aa0400cbb5c80ae415028f38f4274871b3fe6c96bf68161c901e297\"" Aug 6 07:43:08.944204 containerd[1585]: time="2024-08-06T07:43:08.944164931Z" level=info msg="StartContainer for \"5c543b921aa0400cbb5c80ae415028f38f4274871b3fe6c96bf68161c901e297\"" Aug 6 07:43:08.946132 containerd[1585]: time="2024-08-06T07:43:08.946083452Z" level=info msg="CreateContainer within sandbox \"7543acc3f956ad00b2f88f660455aac67f041f3f5ea95594d3527c7f2dd0d999\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91e04811a86019bb7423f72d18cac818340963116176d0d092ac2ac43ef684c1\"" Aug 6 07:43:08.946910 containerd[1585]: time="2024-08-06T07:43:08.946880556Z" level=info msg="CreateContainer within sandbox \"26109ef2b79fc8c1779d3dc26cb14ae6d4678a90fb9df117ece6621742cf649b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42586bdf2cc714d39d1d44913ea25fb9554350f141b81d1532fab398d016f48a\"" Aug 6 07:43:08.947082 containerd[1585]: time="2024-08-06T07:43:08.947055844Z" level=info msg="StartContainer for \"91e04811a86019bb7423f72d18cac818340963116176d0d092ac2ac43ef684c1\"" Aug 6 07:43:08.949602 containerd[1585]: time="2024-08-06T07:43:08.949054906Z" level=info msg="StartContainer for \"42586bdf2cc714d39d1d44913ea25fb9554350f141b81d1532fab398d016f48a\"" Aug 6 07:43:09.001403 kubelet[2280]: I0806 07:43:09.001356 2280 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:09.003371 kubelet[2280]: E0806 07:43:09.003318 2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.156.122:6443/api/v1/nodes\": dial tcp 64.23.156.122:6443: connect: connection refused" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:09.050723 kubelet[2280]: W0806 07:43:09.050485 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.156.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:09.050723 kubelet[2280]: E0806 07:43:09.050555 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.156.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:09.059876 kubelet[2280]: W0806 07:43:09.058191 2280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.156.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:09.059876 kubelet[2280]: E0806 07:43:09.059840 2280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.156.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.156.122:6443: connect: connection refused Aug 6 07:43:09.079182 containerd[1585]: time="2024-08-06T07:43:09.079117060Z" level=info msg="StartContainer for \"5c543b921aa0400cbb5c80ae415028f38f4274871b3fe6c96bf68161c901e297\" returns successfully" Aug 6 07:43:09.141701 containerd[1585]: time="2024-08-06T07:43:09.141634948Z" level=info msg="StartContainer for 
\"91e04811a86019bb7423f72d18cac818340963116176d0d092ac2ac43ef684c1\" returns successfully" Aug 6 07:43:09.156221 containerd[1585]: time="2024-08-06T07:43:09.155010866Z" level=info msg="StartContainer for \"42586bdf2cc714d39d1d44913ea25fb9554350f141b81d1532fab398d016f48a\" returns successfully" Aug 6 07:43:09.560437 kubelet[2280]: E0806 07:43:09.558441 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:09.570898 kubelet[2280]: E0806 07:43:09.568479 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:09.584621 kubelet[2280]: E0806 07:43:09.582450 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:10.585624 kubelet[2280]: E0806 07:43:10.584497 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:10.608011 kubelet[2280]: I0806 07:43:10.605940 2280 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:11.183621 kubelet[2280]: E0806 07:43:11.182559 2280 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:11.572602 kubelet[2280]: E0806 07:43:11.571228 2280 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.1.0-5-8b675ffd7f\" not found" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:11.633993 kubelet[2280]: I0806 07:43:11.633855 2280 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:11.653614 kubelet[2280]: E0806 07:43:11.652492 2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-5-8b675ffd7f\" not found" Aug 6 07:43:12.478959 kubelet[2280]: I0806 07:43:12.477978 2280 apiserver.go:52] "Watching apiserver" Aug 6 07:43:12.499777 kubelet[2280]: I0806 07:43:12.499724 2280 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 6 07:43:14.533211 systemd[1]: Reloading requested from client PID 2560 ('systemctl') (unit session-7.scope)... Aug 6 07:43:14.533231 systemd[1]: Reloading... Aug 6 07:43:14.648617 zram_generator::config[2600]: No configuration found. Aug 6 07:43:14.808767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:43:14.915506 systemd[1]: Reloading finished in 381 ms. Aug 6 07:43:14.954378 kubelet[2280]: I0806 07:43:14.954318 2280 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:43:14.954666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:43:14.967316 systemd[1]: kubelet.service: Deactivated successfully. Aug 6 07:43:14.967952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 6 07:43:14.977053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:43:15.116977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:43:15.127278 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 07:43:15.218642 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:43:15.218642 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 6 07:43:15.218642 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:43:15.218642 kubelet[2658]: I0806 07:43:15.218032 2658 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 07:43:15.236960 kubelet[2658]: I0806 07:43:15.236911 2658 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 07:43:15.236960 kubelet[2658]: I0806 07:43:15.236956 2658 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 07:43:15.239064 kubelet[2658]: I0806 07:43:15.237387 2658 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 07:43:15.240153 kubelet[2658]: I0806 07:43:15.240110 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 6 07:43:15.242664 kubelet[2658]: I0806 07:43:15.241991 2658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:43:15.252173 kubelet[2658]: I0806 07:43:15.252129 2658 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.252672 2658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.252876 2658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.252907 2658 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.252922 2658 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.252979 2658 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:43:15.253369 kubelet[2658]: I0806 07:43:15.253120 2658 kubelet.go:393] "Attempting to sync node with API server" Aug 6 07:43:15.256896 kubelet[2658]: I0806 07:43:15.253137 2658 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 07:43:15.256896 kubelet[2658]: I0806 07:43:15.254296 2658 kubelet.go:309] "Adding apiserver pod source" Aug 6 07:43:15.256896 kubelet[2658]: I0806 07:43:15.254318 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 07:43:15.259616 kubelet[2658]: I0806 07:43:15.258264 2658 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 07:43:15.259616 kubelet[2658]: I0806 07:43:15.258811 2658 server.go:1232] "Started kubelet" Aug 6 07:43:15.263623 kubelet[2658]: I0806 07:43:15.263076 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 07:43:15.265225 kubelet[2658]: E0806 07:43:15.265191 2658 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 07:43:15.265225 kubelet[2658]: E0806 07:43:15.265223 2658 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 07:43:15.271690 kubelet[2658]: I0806 07:43:15.269066 2658 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 07:43:15.274632 kubelet[2658]: I0806 07:43:15.273686 2658 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 07:43:15.277175 kubelet[2658]: I0806 07:43:15.276622 2658 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 07:43:15.293369 kubelet[2658]: I0806 07:43:15.290892 2658 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 07:43:15.302399 kubelet[2658]: I0806 07:43:15.302347 2658 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 07:43:15.302574 kubelet[2658]: I0806 07:43:15.302533 2658 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 07:43:15.303689 kubelet[2658]: I0806 07:43:15.303105 2658 server.go:462] "Adding debug handlers to kubelet server" Aug 6 07:43:15.306557 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 6 07:43:15.306962 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 6 07:43:15.349522 kubelet[2658]: I0806 07:43:15.349486 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 07:43:15.356694 kubelet[2658]: I0806 07:43:15.356299 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 6 07:43:15.356694 kubelet[2658]: I0806 07:43:15.356341 2658 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 07:43:15.356694 kubelet[2658]: I0806 07:43:15.356370 2658 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 07:43:15.356694 kubelet[2658]: E0806 07:43:15.356481 2658 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 07:43:15.399082 kubelet[2658]: I0806 07:43:15.392631 2658 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.413087 kubelet[2658]: I0806 07:43:15.410562 2658 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.413087 kubelet[2658]: I0806 07:43:15.410732 2658 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.457337 kubelet[2658]: E0806 07:43:15.456873 2658 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.526762 2658 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.526803 2658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.526836 2658 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.527086 2658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.527119 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 6 07:43:15.527622 kubelet[2658]: I0806 07:43:15.527131 2658 policy_none.go:49] "None policy: Start" Aug 6 07:43:15.531615 kubelet[2658]: I0806 07:43:15.530573 2658 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 07:43:15.531615 
kubelet[2658]: I0806 07:43:15.530828 2658 state_mem.go:35] "Initializing new in-memory state store" Aug 6 07:43:15.531615 kubelet[2658]: I0806 07:43:15.531410 2658 state_mem.go:75] "Updated machine memory state" Aug 6 07:43:15.537536 kubelet[2658]: I0806 07:43:15.536941 2658 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 07:43:15.544215 kubelet[2658]: I0806 07:43:15.542094 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 07:43:15.658929 kubelet[2658]: I0806 07:43:15.658044 2658 topology_manager.go:215] "Topology Admit Handler" podUID="5ab1ec84ce21a63b57888d357554909e" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.658929 kubelet[2658]: I0806 07:43:15.658240 2658 topology_manager.go:215] "Topology Admit Handler" podUID="5817888588db969fe4b5249727c36bd4" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.658929 kubelet[2658]: I0806 07:43:15.658312 2658 topology_manager.go:215] "Topology Admit Handler" podUID="3823f39468d291db3c4cb589250e6b8b" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.690082 kubelet[2658]: W0806 07:43:15.689751 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:43:15.690677 kubelet[2658]: W0806 07:43:15.689771 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:43:15.691138 kubelet[2658]: W0806 07:43:15.689869 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:43:15.706038 kubelet[2658]: I0806 07:43:15.705349 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706038 kubelet[2658]: I0806 07:43:15.705492 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706038 kubelet[2658]: I0806 07:43:15.705529 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706038 kubelet[2658]: I0806 07:43:15.705575 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " 
pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706038 kubelet[2658]: I0806 07:43:15.705638 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706542 kubelet[2658]: I0806 07:43:15.705688 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706542 kubelet[2658]: I0806 07:43:15.705715 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3823f39468d291db3c4cb589250e6b8b-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-5-8b675ffd7f\" (UID: \"3823f39468d291db3c4cb589250e6b8b\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706542 kubelet[2658]: I0806 07:43:15.705749 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ab1ec84ce21a63b57888d357554909e-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5ab1ec84ce21a63b57888d357554909e\") " pod="kube-system/kube-scheduler-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.706542 kubelet[2658]: I0806 07:43:15.705781 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5817888588db969fe4b5249727c36bd4-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" (UID: \"5817888588db969fe4b5249727c36bd4\") " pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:15.994181 kubelet[2658]: E0806 07:43:15.994146 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:15.994759 kubelet[2658]: E0806 07:43:15.994470 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:15.994759 kubelet[2658]: E0806 07:43:15.994202 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:16.135976 sudo[2673]: pam_unix(sudo:session): session closed for user root Aug 6 07:43:16.260693 kubelet[2658]: I0806 07:43:16.259025 2658 apiserver.go:52] "Watching apiserver" Aug 6 07:43:16.303597 kubelet[2658]: I0806 07:43:16.303491 2658 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 6 07:43:16.394778 kubelet[2658]: E0806 07:43:16.394737 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 
07:43:16.411605 kubelet[2658]: W0806 07:43:16.410203 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:43:16.411605 kubelet[2658]: E0806 07:43:16.410291 2658 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.1.0-5-8b675ffd7f\" already exists" pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:16.413377 kubelet[2658]: W0806 07:43:16.411918 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:43:16.413377 kubelet[2658]: E0806 07:43:16.411980 2658 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012.1.0-5-8b675ffd7f\" already exists" pod="kube-system/kube-scheduler-ci-4012.1.0-5-8b675ffd7f" Aug 6 07:43:16.413377 kubelet[2658]: E0806 07:43:16.412256 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:16.414920 kubelet[2658]: E0806 07:43:16.414899 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:16.451245 kubelet[2658]: I0806 07:43:16.450792 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.1.0-5-8b675ffd7f" podStartSLOduration=1.450746677 podCreationTimestamp="2024-08-06 07:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:16.438148079 +0000 UTC m=+1.302451897" watchObservedRunningTime="2024-08-06 07:43:16.450746677 +0000 UTC m=+1.315050382" Aug 6 07:43:16.466556 kubelet[2658]: I0806 07:43:16.466336 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.1.0-5-8b675ffd7f" podStartSLOduration=1.466259459 podCreationTimestamp="2024-08-06 07:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:16.451904986 +0000 UTC m=+1.316208693" watchObservedRunningTime="2024-08-06 07:43:16.466259459 +0000 UTC m=+1.330563175" Aug 6 07:43:17.395570 kubelet[2658]: E0806 07:43:17.393567 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:17.396239 kubelet[2658]: E0806 07:43:17.394215 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:17.900564 sudo[1791]: pam_unix(sudo:session): session closed for user root Aug 6 07:43:17.905127 sshd[1784]: pam_unix(sshd:session): session closed for user core Aug 6 07:43:17.910403 systemd[1]: sshd@6-64.23.156.122:22-139.178.89.65:41794.service: Deactivated successfully. Aug 6 07:43:17.917910 systemd[1]: session-7.scope: Deactivated successfully. Aug 6 07:43:17.919662 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Aug 6 07:43:17.921241 systemd-logind[1553]: Removed session 7. 
Aug 6 07:43:23.344009 kubelet[2658]: E0806 07:43:23.342341 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:23.359313 kubelet[2658]: I0806 07:43:23.359096 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.1.0-5-8b675ffd7f" podStartSLOduration=8.359012034 podCreationTimestamp="2024-08-06 07:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:16.465916329 +0000 UTC m=+1.330220034" watchObservedRunningTime="2024-08-06 07:43:23.359012034 +0000 UTC m=+8.223315740" Aug 6 07:43:23.405990 kubelet[2658]: E0806 07:43:23.405644 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:24.285622 kubelet[2658]: E0806 07:43:24.285152 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:24.407296 kubelet[2658]: E0806 07:43:24.407196 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:26.158097 kubelet[2658]: E0806 07:43:26.158030 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:26.412274 kubelet[2658]: E0806 07:43:26.411571 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:26.974380 kubelet[2658]: I0806 07:43:26.974325 2658 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 6 07:43:26.975053 containerd[1585]: time="2024-08-06T07:43:26.974973319Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 6 07:43:26.976279 kubelet[2658]: I0806 07:43:26.975208 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 6 07:43:27.865775 kubelet[2658]: I0806 07:43:27.865557 2658 topology_manager.go:215] "Topology Admit Handler" podUID="a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-jl9gs" Aug 6 07:43:27.946940 kubelet[2658]: I0806 07:43:27.946884 2658 topology_manager.go:215] "Topology Admit Handler" podUID="1938d3d5-17d8-4123-8001-e72129229d75" podNamespace="kube-system" podName="kube-proxy-pv96p" Aug 6 07:43:27.978808 kubelet[2658]: I0806 07:43:27.978768 2658 topology_manager.go:215] "Topology Admit Handler" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" podNamespace="kube-system" podName="cilium-r9ptb" Aug 6 07:43:27.987754 kubelet[2658]: I0806 07:43:27.987348 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-jl9gs\" (UID: \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\") " pod="kube-system/cilium-operator-6bc8ccdb58-jl9gs" Aug 6 07:43:27.987754 kubelet[2658]: I0806 07:43:27.987420 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8wz\" (UniqueName: \"kubernetes.io/projected/1938d3d5-17d8-4123-8001-e72129229d75-kube-api-access-2d8wz\") pod \"kube-proxy-pv96p\" (UID: \"1938d3d5-17d8-4123-8001-e72129229d75\") " pod="kube-system/kube-proxy-pv96p" Aug 6 07:43:27.987754 kubelet[2658]: I0806 07:43:27.987455 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-run\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.987754 kubelet[2658]: I0806 07:43:27.987593 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxjj\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-kube-api-access-bmxjj\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.987754 kubelet[2658]: I0806 07:43:27.987709 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-net\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989348 kubelet[2658]: I0806 07:43:27.988950 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ktq2\" (UniqueName: \"kubernetes.io/projected/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-kube-api-access-9ktq2\") pod \"cilium-operator-6bc8ccdb58-jl9gs\" (UID: \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\") " pod="kube-system/cilium-operator-6bc8ccdb58-jl9gs" Aug 6 07:43:27.989348 kubelet[2658]: I0806 07:43:27.989020 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1938d3d5-17d8-4123-8001-e72129229d75-kube-proxy\") pod \"kube-proxy-pv96p\" (UID: \"1938d3d5-17d8-4123-8001-e72129229d75\") " 
pod="kube-system/kube-proxy-pv96p" Aug 6 07:43:27.989348 kubelet[2658]: I0806 07:43:27.989056 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-hostproc\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989348 kubelet[2658]: I0806 07:43:27.989091 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-etc-cni-netd\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989348 kubelet[2658]: I0806 07:43:27.989121 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-hubble-tls\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989701 kubelet[2658]: I0806 07:43:27.989155 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-cgroup\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989701 kubelet[2658]: I0806 07:43:27.989187 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cni-path\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989701 kubelet[2658]: I0806 07:43:27.989218 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-xtables-lock\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.989701 kubelet[2658]: I0806 07:43:27.989249 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-config-path\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990454 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-kernel\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990515 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1938d3d5-17d8-4123-8001-e72129229d75-xtables-lock\") pod \"kube-proxy-pv96p\" (UID: \"1938d3d5-17d8-4123-8001-e72129229d75\") " pod="kube-system/kube-proxy-pv96p" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990535 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1938d3d5-17d8-4123-8001-e72129229d75-lib-modules\") pod \"kube-proxy-pv96p\" (UID: \"1938d3d5-17d8-4123-8001-e72129229d75\") " pod="kube-system/kube-proxy-pv96p" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990568 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-lib-modules\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990612 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d7231de-f87e-403a-8a31-7dcef96a3150-clustermesh-secrets\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:27.991614 kubelet[2658]: I0806 07:43:27.990634 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-bpf-maps\") pod \"cilium-r9ptb\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " pod="kube-system/cilium-r9ptb" Aug 6 07:43:28.179975 kubelet[2658]: E0806 07:43:28.177560 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.180291 containerd[1585]: time="2024-08-06T07:43:28.178716970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jl9gs,Uid:a67e97f7-ae7a-4a7e-942a-6f5d7bff829f,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:28.224230 containerd[1585]: time="2024-08-06T07:43:28.221632147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:28.224230 containerd[1585]: time="2024-08-06T07:43:28.221719724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.224230 containerd[1585]: time="2024-08-06T07:43:28.221738038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:28.224230 containerd[1585]: time="2024-08-06T07:43:28.221757431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.255740 kubelet[2658]: E0806 07:43:28.255614 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.259522 containerd[1585]: time="2024-08-06T07:43:28.259446231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pv96p,Uid:1938d3d5-17d8-4123-8001-e72129229d75,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:28.293943 kubelet[2658]: E0806 07:43:28.293517 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.300390 containerd[1585]: time="2024-08-06T07:43:28.299852685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9ptb,Uid:3d7231de-f87e-403a-8a31-7dcef96a3150,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:28.313880 containerd[1585]: time="2024-08-06T07:43:28.313449320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:28.313880 containerd[1585]: time="2024-08-06T07:43:28.313519059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.313880 containerd[1585]: time="2024-08-06T07:43:28.313545296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:28.313880 containerd[1585]: time="2024-08-06T07:43:28.313558771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.337969 containerd[1585]: time="2024-08-06T07:43:28.337788696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jl9gs,Uid:a67e97f7-ae7a-4a7e-942a-6f5d7bff829f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\"" Aug 6 07:43:28.340087 kubelet[2658]: E0806 07:43:28.340028 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.345496 containerd[1585]: time="2024-08-06T07:43:28.345152270Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 6 07:43:28.356091 containerd[1585]: time="2024-08-06T07:43:28.355596119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:28.356091 containerd[1585]: time="2024-08-06T07:43:28.355743745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.356091 containerd[1585]: time="2024-08-06T07:43:28.355784290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:28.356091 containerd[1585]: time="2024-08-06T07:43:28.355804146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:28.427341 containerd[1585]: time="2024-08-06T07:43:28.427186428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pv96p,Uid:1938d3d5-17d8-4123-8001-e72129229d75,Namespace:kube-system,Attempt:0,} returns sandbox id \"1287eae13382605246fe9e04726c2434a26ee9616f4f0a1c53d8ce5bdd37f63a\"" Aug 6 07:43:28.429140 kubelet[2658]: E0806 07:43:28.428857 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.439904 containerd[1585]: time="2024-08-06T07:43:28.439644014Z" level=info msg="CreateContainer within sandbox \"1287eae13382605246fe9e04726c2434a26ee9616f4f0a1c53d8ce5bdd37f63a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 6 07:43:28.448914 containerd[1585]: time="2024-08-06T07:43:28.448721388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9ptb,Uid:3d7231de-f87e-403a-8a31-7dcef96a3150,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\"" Aug 6 07:43:28.450190 kubelet[2658]: E0806 07:43:28.450154 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:28.463913 containerd[1585]: time="2024-08-06T07:43:28.463743853Z" level=info msg="CreateContainer within sandbox \"1287eae13382605246fe9e04726c2434a26ee9616f4f0a1c53d8ce5bdd37f63a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea17081609b221d971dd129efc5c119329e88b37447074865f2aa053cf32edee\"" Aug 6 07:43:28.466652 containerd[1585]: time="2024-08-06T07:43:28.464772275Z" level=info msg="StartContainer for \"ea17081609b221d971dd129efc5c119329e88b37447074865f2aa053cf32edee\"" Aug 6 07:43:28.568027 containerd[1585]: time="2024-08-06T07:43:28.567961716Z" level=info msg="StartContainer for \"ea17081609b221d971dd129efc5c119329e88b37447074865f2aa053cf32edee\" returns successfully" Aug 6 07:43:29.427456 kubelet[2658]: E0806 07:43:29.427389 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:29.739796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424467967.mount: Deactivated successfully. Aug 6 07:43:30.506806 containerd[1585]: time="2024-08-06T07:43:30.506732012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:30.507808 containerd[1585]: time="2024-08-06T07:43:30.507573219Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907205" Aug 6 07:43:30.508554 containerd[1585]: time="2024-08-06T07:43:30.508459048Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:30.510607 update_engine[1554]: I0806 07:43:30.508991 1554 update_attempter.cc:509] Updating boot flags... 
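The mount units deactivated in this stretch of the log, for example var-lib-containerd-tmpmounts-containerd\x2dmount2424467967.mount, are systemd's escaped spelling of paths under /var/lib/containerd/tmpmounts/: slashes become dashes and a literal dash becomes \x2d. A rough Go sketch of that escaping follows; it is simplified (it skips systemd's root-path, empty-path, and leading-dot special cases) and the input path is inferred from the unit name rather than quoted from the log.

// Simplified version of systemd's path-to-unit-name escaping, to explain
// mount unit names like "var-lib-containerd-tmpmounts-containerd\x2dmount...":
// "/" becomes "-", and bytes other than ASCII alphanumerics, ":", "_" and "."
// become \xHH escapes, so a literal "-" turns into "\x2d".
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints var-lib-containerd-tmpmounts-containerd\x2dmount2424467967.mount,
	// matching the unit name in the log; the path itself is an assumption
	// reconstructed from that unit name.
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2424467967") + ".mount")
}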
Aug 6 07:43:30.512989 containerd[1585]: time="2024-08-06T07:43:30.512952193Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.167738476s" Aug 6 07:43:30.513116 containerd[1585]: time="2024-08-06T07:43:30.513101235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 6 07:43:30.535909 containerd[1585]: time="2024-08-06T07:43:30.535843557Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 6 07:43:30.544864 containerd[1585]: time="2024-08-06T07:43:30.544821877Z" level=info msg="CreateContainer within sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 6 07:43:30.567538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067515481.mount: Deactivated successfully. Aug 6 07:43:30.577861 containerd[1585]: time="2024-08-06T07:43:30.577806582Z" level=info msg="CreateContainer within sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\"" Aug 6 07:43:30.580525 containerd[1585]: time="2024-08-06T07:43:30.580482238Z" level=info msg="StartContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\"" Aug 6 07:43:30.587376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3039) Aug 6 07:43:30.720653 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3044) Aug 6 07:43:30.733158 containerd[1585]: time="2024-08-06T07:43:30.730892119Z" level=info msg="StartContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" returns successfully" Aug 6 07:43:31.447995 kubelet[2658]: E0806 07:43:31.447958 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:31.494007 kubelet[2658]: I0806 07:43:31.493517 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pv96p" podStartSLOduration=4.491980024 podCreationTimestamp="2024-08-06 07:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:29.439405253 +0000 UTC m=+14.303708957" watchObservedRunningTime="2024-08-06 07:43:31.491980024 +0000 UTC m=+16.356283719" Aug 6 07:43:31.494007 kubelet[2658]: I0806 07:43:31.493897 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-jl9gs" podStartSLOduration=2.318106192 podCreationTimestamp="2024-08-06 07:43:27 +0000 UTC" firstStartedPulling="2024-08-06 07:43:28.342423439 +0000 UTC m=+13.206727136" lastFinishedPulling="2024-08-06 07:43:30.518168309 +0000 
UTC m=+15.382472004" observedRunningTime="2024-08-06 07:43:31.493601545 +0000 UTC m=+16.357905228" watchObservedRunningTime="2024-08-06 07:43:31.49385106 +0000 UTC m=+16.358154764" Aug 6 07:43:32.454055 kubelet[2658]: E0806 07:43:32.449567 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:35.511345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564776314.mount: Deactivated successfully. Aug 6 07:43:37.987649 containerd[1585]: time="2024-08-06T07:43:37.984943609Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299" Aug 6 07:43:37.989459 containerd[1585]: time="2024-08-06T07:43:37.981187069Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:37.991234 containerd[1585]: time="2024-08-06T07:43:37.991106772Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.454460052s" Aug 6 07:43:37.991234 containerd[1585]: time="2024-08-06T07:43:37.991150700Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 6 07:43:37.992904 containerd[1585]: time="2024-08-06T07:43:37.992539593Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:43:37.994652 containerd[1585]: time="2024-08-06T07:43:37.994535877Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 6 07:43:38.073715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988269814.mount: Deactivated successfully. 
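The pod_startup_latency_tracker lines in this log report podStartSLOduration as the wall time from podCreationTimestamp to observedRunningTime, minus the image-pull window when one is recorded. With zero-valued pull timestamps nothing is subtracted, which is why the kube-apiserver entry earlier comes out to exactly 8.359012034 s (07:43:23.359012034 minus 07:43:15); for the cilium-operator entry just above, a pull window of about 2.18 s is deducted from about 4.49 s of wall time, landing near the logged 2.318 s. Below is a small Go sketch of that arithmetic using timestamps copied from the cilium-operator entry; the rule is inferred from the logged fields rather than taken from kubelet source.

// Hedged reconstruction of the podStartSLOduration arithmetic seen in the
// pod_startup_latency_tracker entries: time from pod creation to observed
// running, minus the image-pull window (skipped when the pull timestamps
// are the zero value). Field names come from the log, not kubelet source.
package main

import (
	"fmt"
	"time"
)

func podStartSLO(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull)
	}
	return d
}

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the cilium-operator-6bc8ccdb58-jl9gs entry above.
	created := parse("2024-08-06 07:43:27 +0000 UTC")
	running := parse("2024-08-06 07:43:31.493601545 +0000 UTC")
	firstPull := parse("2024-08-06 07:43:28.342423439 +0000 UTC")
	lastPull := parse("2024-08-06 07:43:30.518168309 +0000 UTC")
	// Prints roughly 2.3179s; the logged 2.318106192 differs by about 0.25 ms,
	// presumably because kubelet takes its own clock readings.
	fmt.Println(podStartSLO(created, running, firstPull, lastPull))
}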
Aug 6 07:43:38.075114 containerd[1585]: time="2024-08-06T07:43:38.074946259Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\"" Aug 6 07:43:38.078495 containerd[1585]: time="2024-08-06T07:43:38.078439796Z" level=info msg="StartContainer for \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\"" Aug 6 07:43:38.372620 containerd[1585]: time="2024-08-06T07:43:38.371758375Z" level=info msg="StartContainer for \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\" returns successfully" Aug 6 07:43:38.476721 kubelet[2658]: E0806 07:43:38.472909 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:38.548940 containerd[1585]: time="2024-08-06T07:43:38.516825179Z" level=info msg="shim disconnected" id=34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2 namespace=k8s.io Aug 6 07:43:38.548940 containerd[1585]: time="2024-08-06T07:43:38.548642580Z" level=warning msg="cleaning up after shim disconnected" id=34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2 namespace=k8s.io Aug 6 07:43:38.548940 containerd[1585]: time="2024-08-06T07:43:38.548675751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:43:38.572033 containerd[1585]: time="2024-08-06T07:43:38.571944266Z" level=warning msg="cleanup warnings time=\"2024-08-06T07:43:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 6 07:43:39.062684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2-rootfs.mount: Deactivated successfully. Aug 6 07:43:39.474169 kubelet[2658]: E0806 07:43:39.474129 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:39.479077 containerd[1585]: time="2024-08-06T07:43:39.478149842Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 6 07:43:39.505392 containerd[1585]: time="2024-08-06T07:43:39.501341600Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\"" Aug 6 07:43:39.504400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347175087.mount: Deactivated successfully. Aug 6 07:43:39.511154 containerd[1585]: time="2024-08-06T07:43:39.510540280Z" level=info msg="StartContainer for \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\"" Aug 6 07:43:39.615252 containerd[1585]: time="2024-08-06T07:43:39.615183407Z" level=info msg="StartContainer for \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\" returns successfully" Aug 6 07:43:39.624313 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 6 07:43:39.625491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:43:39.626359 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:43:39.640185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:43:39.662430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:43:39.675031 containerd[1585]: time="2024-08-06T07:43:39.674954668Z" level=info msg="shim disconnected" id=99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608 namespace=k8s.io Aug 6 07:43:39.675331 containerd[1585]: time="2024-08-06T07:43:39.675306345Z" level=warning msg="cleaning up after shim disconnected" id=99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608 namespace=k8s.io Aug 6 07:43:39.675419 containerd[1585]: time="2024-08-06T07:43:39.675406377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:43:40.063261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608-rootfs.mount: Deactivated successfully. Aug 6 07:43:40.480138 kubelet[2658]: E0806 07:43:40.479389 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:40.484158 containerd[1585]: time="2024-08-06T07:43:40.483926946Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 6 07:43:40.525840 containerd[1585]: time="2024-08-06T07:43:40.523515273Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\"" Aug 6 07:43:40.526819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001201315.mount: Deactivated successfully. Aug 6 07:43:40.528059 containerd[1585]: time="2024-08-06T07:43:40.527966249Z" level=info msg="StartContainer for \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\"" Aug 6 07:43:40.628623 containerd[1585]: time="2024-08-06T07:43:40.628531888Z" level=info msg="StartContainer for \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\" returns successfully" Aug 6 07:43:40.667486 containerd[1585]: time="2024-08-06T07:43:40.667411147Z" level=info msg="shim disconnected" id=143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba namespace=k8s.io Aug 6 07:43:40.668253 containerd[1585]: time="2024-08-06T07:43:40.667944311Z" level=warning msg="cleaning up after shim disconnected" id=143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba namespace=k8s.io Aug 6 07:43:40.668253 containerd[1585]: time="2024-08-06T07:43:40.668008546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:43:41.063455 systemd[1]: run-containerd-runc-k8s.io-143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba-runc.MVVVd3.mount: Deactivated successfully. Aug 6 07:43:41.063811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba-rootfs.mount: Deactivated successfully. 
Aug 6 07:43:41.507957 kubelet[2658]: E0806 07:43:41.507920 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:41.511438 containerd[1585]: time="2024-08-06T07:43:41.511380316Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 6 07:43:41.551473 containerd[1585]: time="2024-08-06T07:43:41.551337559Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\"" Aug 6 07:43:41.553940 containerd[1585]: time="2024-08-06T07:43:41.553876682Z" level=info msg="StartContainer for \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\"" Aug 6 07:43:41.628732 containerd[1585]: time="2024-08-06T07:43:41.628283181Z" level=info msg="StartContainer for \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\" returns successfully" Aug 6 07:43:41.666777 containerd[1585]: time="2024-08-06T07:43:41.666697143Z" level=info msg="shim disconnected" id=faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603 namespace=k8s.io Aug 6 07:43:41.666777 containerd[1585]: time="2024-08-06T07:43:41.666761398Z" level=warning msg="cleaning up after shim disconnected" id=faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603 namespace=k8s.io Aug 6 07:43:41.666777 containerd[1585]: time="2024-08-06T07:43:41.666771110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:43:42.062733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603-rootfs.mount: Deactivated successfully. 
Aug 6 07:43:42.515487 kubelet[2658]: E0806 07:43:42.515429 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:42.524862 containerd[1585]: time="2024-08-06T07:43:42.524805672Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 6 07:43:42.578488 containerd[1585]: time="2024-08-06T07:43:42.578422313Z" level=info msg="CreateContainer within sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\"" Aug 6 07:43:42.580121 containerd[1585]: time="2024-08-06T07:43:42.579824524Z" level=info msg="StartContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\"" Aug 6 07:43:42.668635 containerd[1585]: time="2024-08-06T07:43:42.668440612Z" level=info msg="StartContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" returns successfully" Aug 6 07:43:42.889271 kubelet[2658]: I0806 07:43:42.888312 2658 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 6 07:43:42.921865 kubelet[2658]: I0806 07:43:42.920915 2658 topology_manager.go:215] "Topology Admit Handler" podUID="31364e8f-9d7e-4366-9a9a-9f03af1cf701" podNamespace="kube-system" podName="coredns-5dd5756b68-hk4qj" Aug 6 07:43:42.925781 kubelet[2658]: I0806 07:43:42.925399 2658 topology_manager.go:215] "Topology Admit Handler" podUID="5979ecd6-799e-4eb5-be8e-4b0472aca8cd" podNamespace="kube-system" podName="coredns-5dd5756b68-j8dww" Aug 6 07:43:43.035420 kubelet[2658]: I0806 07:43:43.035340 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31364e8f-9d7e-4366-9a9a-9f03af1cf701-config-volume\") pod \"coredns-5dd5756b68-hk4qj\" (UID: \"31364e8f-9d7e-4366-9a9a-9f03af1cf701\") " pod="kube-system/coredns-5dd5756b68-hk4qj" Aug 6 07:43:43.035848 kubelet[2658]: I0806 07:43:43.035478 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjj66\" (UniqueName: \"kubernetes.io/projected/31364e8f-9d7e-4366-9a9a-9f03af1cf701-kube-api-access-wjj66\") pod \"coredns-5dd5756b68-hk4qj\" (UID: \"31364e8f-9d7e-4366-9a9a-9f03af1cf701\") " pod="kube-system/coredns-5dd5756b68-hk4qj" Aug 6 07:43:43.035848 kubelet[2658]: I0806 07:43:43.035550 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmpbq\" (UniqueName: \"kubernetes.io/projected/5979ecd6-799e-4eb5-be8e-4b0472aca8cd-kube-api-access-hmpbq\") pod \"coredns-5dd5756b68-j8dww\" (UID: \"5979ecd6-799e-4eb5-be8e-4b0472aca8cd\") " pod="kube-system/coredns-5dd5756b68-j8dww" Aug 6 07:43:43.035848 kubelet[2658]: I0806 07:43:43.035720 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5979ecd6-799e-4eb5-be8e-4b0472aca8cd-config-volume\") pod \"coredns-5dd5756b68-j8dww\" (UID: \"5979ecd6-799e-4eb5-be8e-4b0472aca8cd\") " pod="kube-system/coredns-5dd5756b68-j8dww" Aug 6 07:43:43.068486 systemd[1]: 
run-containerd-runc-k8s.io-ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178-runc.IBPJKJ.mount: Deactivated successfully. Aug 6 07:43:43.231052 kubelet[2658]: E0806 07:43:43.230710 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:43.238950 containerd[1585]: time="2024-08-06T07:43:43.238895396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hk4qj,Uid:31364e8f-9d7e-4366-9a9a-9f03af1cf701,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:43.246235 kubelet[2658]: E0806 07:43:43.246167 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:43.249372 containerd[1585]: time="2024-08-06T07:43:43.249036717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j8dww,Uid:5979ecd6-799e-4eb5-be8e-4b0472aca8cd,Namespace:kube-system,Attempt:0,}" Aug 6 07:43:43.525352 kubelet[2658]: E0806 07:43:43.525203 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:43.552197 kubelet[2658]: I0806 07:43:43.552142 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r9ptb" podStartSLOduration=7.013526145 podCreationTimestamp="2024-08-06 07:43:27 +0000 UTC" firstStartedPulling="2024-08-06 07:43:28.453333481 +0000 UTC m=+13.317637178" lastFinishedPulling="2024-08-06 07:43:37.991890567 +0000 UTC m=+22.856194263" observedRunningTime="2024-08-06 07:43:43.550655529 +0000 UTC m=+28.414959233" watchObservedRunningTime="2024-08-06 07:43:43.55208323 +0000 UTC m=+28.416386935" Aug 6 07:43:44.530330 kubelet[2658]: E0806 07:43:44.530209 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:44.966639 systemd-networkd[1229]: cilium_host: Link UP Aug 6 07:43:44.966802 systemd-networkd[1229]: cilium_net: Link UP Aug 6 07:43:44.966806 systemd-networkd[1229]: cilium_net: Gained carrier Aug 6 07:43:44.967008 systemd-networkd[1229]: cilium_host: Gained carrier Aug 6 07:43:44.968789 systemd-networkd[1229]: cilium_host: Gained IPv6LL Aug 6 07:43:45.126796 systemd-networkd[1229]: cilium_vxlan: Link UP Aug 6 07:43:45.126806 systemd-networkd[1229]: cilium_vxlan: Gained carrier Aug 6 07:43:45.455853 kernel: NET: Registered PF_ALG protocol family Aug 6 07:43:45.533147 kubelet[2658]: E0806 07:43:45.532854 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:45.853188 systemd-networkd[1229]: cilium_net: Gained IPv6LL Aug 6 07:43:46.428173 systemd-networkd[1229]: lxc_health: Link UP Aug 6 07:43:46.443818 systemd-networkd[1229]: lxc_health: Gained carrier Aug 6 07:43:46.841233 systemd-networkd[1229]: lxcae472c056e91: Link UP Aug 6 07:43:46.852980 kernel: eth0: renamed from tmpe98b7 Aug 6 07:43:46.866490 systemd-networkd[1229]: lxcd255a53dbb4d: Link UP Aug 6 07:43:46.878627 kernel: eth0: renamed from tmp0d35d Aug 6 07:43:46.879446 systemd-networkd[1229]: lxcae472c056e91: Gained carrier Aug 6 07:43:46.885542 
systemd-networkd[1229]: cilium_vxlan: Gained IPv6LL Aug 6 07:43:46.889066 systemd-networkd[1229]: lxcd255a53dbb4d: Gained carrier Aug 6 07:43:48.028916 systemd-networkd[1229]: lxcae472c056e91: Gained IPv6LL Aug 6 07:43:48.094025 systemd-networkd[1229]: lxc_health: Gained IPv6LL Aug 6 07:43:48.298749 kubelet[2658]: E0806 07:43:48.297479 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:48.669296 systemd-networkd[1229]: lxcd255a53dbb4d: Gained IPv6LL Aug 6 07:43:52.311989 containerd[1585]: time="2024-08-06T07:43:52.309559060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:52.311989 containerd[1585]: time="2024-08-06T07:43:52.309703100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:52.311989 containerd[1585]: time="2024-08-06T07:43:52.309738319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:52.311989 containerd[1585]: time="2024-08-06T07:43:52.309758230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:52.328030 containerd[1585]: time="2024-08-06T07:43:52.327227957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:43:52.328700 containerd[1585]: time="2024-08-06T07:43:52.327446990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:52.329347 containerd[1585]: time="2024-08-06T07:43:52.329267236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:43:52.329347 containerd[1585]: time="2024-08-06T07:43:52.329306356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:43:52.497310 containerd[1585]: time="2024-08-06T07:43:52.497271531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hk4qj,Uid:31364e8f-9d7e-4366-9a9a-9f03af1cf701,Namespace:kube-system,Attempt:0,} returns sandbox id \"e98b74ab90f4d0ae7d15fce567e2ef9ac7e5a6a14be74f9b97607709462e3059\"" Aug 6 07:43:52.497706 containerd[1585]: time="2024-08-06T07:43:52.497685716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j8dww,Uid:5979ecd6-799e-4eb5-be8e-4b0472aca8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d35dbf176ec4c9839e663ad477f2add824823262574d3ed312a0549c3ffb16a\"" Aug 6 07:43:52.500148 kubelet[2658]: E0806 07:43:52.498705 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:52.501673 kubelet[2658]: E0806 07:43:52.501526 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:52.509055 containerd[1585]: time="2024-08-06T07:43:52.508997087Z" level=info msg="CreateContainer within sandbox \"e98b74ab90f4d0ae7d15fce567e2ef9ac7e5a6a14be74f9b97607709462e3059\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:43:52.509468 containerd[1585]: time="2024-08-06T07:43:52.509311463Z" level=info msg="CreateContainer within sandbox \"0d35dbf176ec4c9839e663ad477f2add824823262574d3ed312a0549c3ffb16a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:43:52.535734 containerd[1585]: time="2024-08-06T07:43:52.535221123Z" level=info msg="CreateContainer within sandbox \"0d35dbf176ec4c9839e663ad477f2add824823262574d3ed312a0549c3ffb16a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2c1f24a2276979de0bf67e33d2c4c67dfab691fc4b3638cecf88748d4b219c9\"" Aug 6 07:43:52.539273 containerd[1585]: time="2024-08-06T07:43:52.537482899Z" level=info msg="CreateContainer within sandbox \"e98b74ab90f4d0ae7d15fce567e2ef9ac7e5a6a14be74f9b97607709462e3059\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c95b2e260afd2736d1938be85dbf1afa1fad9c981592c2044d36b63352681e3d\"" Aug 6 07:43:52.539826 containerd[1585]: time="2024-08-06T07:43:52.539780943Z" level=info msg="StartContainer for \"c95b2e260afd2736d1938be85dbf1afa1fad9c981592c2044d36b63352681e3d\"" Aug 6 07:43:52.540095 containerd[1585]: time="2024-08-06T07:43:52.540018947Z" level=info msg="StartContainer for \"c2c1f24a2276979de0bf67e33d2c4c67dfab691fc4b3638cecf88748d4b219c9\"" Aug 6 07:43:52.639278 containerd[1585]: time="2024-08-06T07:43:52.637894441Z" level=info msg="StartContainer for \"c95b2e260afd2736d1938be85dbf1afa1fad9c981592c2044d36b63352681e3d\" returns successfully" Aug 6 07:43:52.658336 containerd[1585]: time="2024-08-06T07:43:52.657966041Z" level=info msg="StartContainer for \"c2c1f24a2276979de0bf67e33d2c4c67dfab691fc4b3638cecf88748d4b219c9\" returns successfully" Aug 6 07:43:53.345984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633113581.mount: Deactivated successfully. Aug 6 07:43:53.346257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078080586.mount: Deactivated successfully. 
Aug 6 07:43:53.586212 kubelet[2658]: E0806 07:43:53.585810 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:53.592715 kubelet[2658]: E0806 07:43:53.591722 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:53.608876 kubelet[2658]: I0806 07:43:53.608088 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hk4qj" podStartSLOduration=26.608009122 podCreationTimestamp="2024-08-06 07:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:53.606889755 +0000 UTC m=+38.471193459" watchObservedRunningTime="2024-08-06 07:43:53.608009122 +0000 UTC m=+38.472312831" Aug 6 07:43:54.069147 kubelet[2658]: I0806 07:43:54.069092 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 07:43:54.071870 kubelet[2658]: E0806 07:43:54.071496 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:54.087727 kubelet[2658]: I0806 07:43:54.087561 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-j8dww" podStartSLOduration=27.087513636 podCreationTimestamp="2024-08-06 07:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:43:53.670908102 +0000 UTC m=+38.535211805" watchObservedRunningTime="2024-08-06 07:43:54.087513636 +0000 UTC m=+38.951817401" Aug 6 07:43:54.593563 kubelet[2658]: E0806 07:43:54.592103 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:54.593563 kubelet[2658]: E0806 07:43:54.592896 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:54.595647 kubelet[2658]: E0806 07:43:54.594218 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:55.594678 kubelet[2658]: E0806 07:43:55.594219 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:55.594678 kubelet[2658]: E0806 07:43:55.594464 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:43:55.670728 systemd[1]: Started sshd@8-64.23.156.122:22-139.178.89.65:55724.service - OpenSSH per-connection server daemon (139.178.89.65:55724). 
Aug 6 07:43:55.735540 sshd[4037]: Accepted publickey for core from 139.178.89.65 port 55724 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:43:55.741807 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:43:55.762995 systemd-logind[1553]: New session 8 of user core. Aug 6 07:43:55.768280 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 6 07:43:56.389175 sshd[4037]: pam_unix(sshd:session): session closed for user core Aug 6 07:43:56.398362 systemd[1]: sshd@8-64.23.156.122:22-139.178.89.65:55724.service: Deactivated successfully. Aug 6 07:43:56.405964 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Aug 6 07:43:56.406120 systemd[1]: session-8.scope: Deactivated successfully. Aug 6 07:43:56.409261 systemd-logind[1553]: Removed session 8. Aug 6 07:44:01.399216 systemd[1]: Started sshd@9-64.23.156.122:22-139.178.89.65:52090.service - OpenSSH per-connection server daemon (139.178.89.65:52090). Aug 6 07:44:01.485376 sshd[4054]: Accepted publickey for core from 139.178.89.65 port 52090 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:01.489006 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:01.512978 systemd-logind[1553]: New session 9 of user core. Aug 6 07:44:01.533510 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 6 07:44:01.873137 sshd[4054]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:01.880214 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. Aug 6 07:44:01.882228 systemd[1]: sshd@9-64.23.156.122:22-139.178.89.65:52090.service: Deactivated successfully. Aug 6 07:44:01.890874 systemd[1]: session-9.scope: Deactivated successfully. Aug 6 07:44:01.893864 systemd-logind[1553]: Removed session 9. Aug 6 07:44:06.888800 systemd[1]: Started sshd@10-64.23.156.122:22-139.178.89.65:52104.service - OpenSSH per-connection server daemon (139.178.89.65:52104). Aug 6 07:44:06.948133 sshd[4069]: Accepted publickey for core from 139.178.89.65 port 52104 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:06.950839 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:06.960180 systemd-logind[1553]: New session 10 of user core. Aug 6 07:44:06.965039 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 6 07:44:07.129732 sshd[4069]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:07.136344 systemd[1]: sshd@10-64.23.156.122:22-139.178.89.65:52104.service: Deactivated successfully. Aug 6 07:44:07.142072 systemd[1]: session-10.scope: Deactivated successfully. Aug 6 07:44:07.145600 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit. Aug 6 07:44:07.147428 systemd-logind[1553]: Removed session 10. Aug 6 07:44:12.139039 systemd[1]: Started sshd@11-64.23.156.122:22-139.178.89.65:55406.service - OpenSSH per-connection server daemon (139.178.89.65:55406). Aug 6 07:44:12.200637 sshd[4084]: Accepted publickey for core from 139.178.89.65 port 55406 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:12.204818 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:12.214370 systemd-logind[1553]: New session 11 of user core. Aug 6 07:44:12.217250 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 6 07:44:12.401920 sshd[4084]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:12.423902 systemd[1]: Started sshd@12-64.23.156.122:22-139.178.89.65:55408.service - OpenSSH per-connection server daemon (139.178.89.65:55408). Aug 6 07:44:12.425144 systemd[1]: sshd@11-64.23.156.122:22-139.178.89.65:55406.service: Deactivated successfully. Aug 6 07:44:12.436611 systemd[1]: session-11.scope: Deactivated successfully. Aug 6 07:44:12.440465 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit. Aug 6 07:44:12.442753 systemd-logind[1553]: Removed session 11. Aug 6 07:44:12.489863 sshd[4096]: Accepted publickey for core from 139.178.89.65 port 55408 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:12.492794 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:12.500876 systemd-logind[1553]: New session 12 of user core. Aug 6 07:44:12.508361 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 6 07:44:13.700133 sshd[4096]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:13.727266 systemd[1]: Started sshd@13-64.23.156.122:22-139.178.89.65:55412.service - OpenSSH per-connection server daemon (139.178.89.65:55412). Aug 6 07:44:13.727844 systemd[1]: sshd@12-64.23.156.122:22-139.178.89.65:55408.service: Deactivated successfully. Aug 6 07:44:13.753830 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit. Aug 6 07:44:13.757534 systemd[1]: session-12.scope: Deactivated successfully. Aug 6 07:44:13.768943 systemd-logind[1553]: Removed session 12. Aug 6 07:44:13.836636 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 55412 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:13.838303 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:13.845482 systemd-logind[1553]: New session 13 of user core. Aug 6 07:44:13.849098 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 6 07:44:13.996639 sshd[4108]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:14.000856 systemd[1]: sshd@13-64.23.156.122:22-139.178.89.65:55412.service: Deactivated successfully. Aug 6 07:44:14.007329 systemd[1]: session-13.scope: Deactivated successfully. Aug 6 07:44:14.008037 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit. Aug 6 07:44:14.011315 systemd-logind[1553]: Removed session 13. Aug 6 07:44:19.010680 systemd[1]: Started sshd@14-64.23.156.122:22-139.178.89.65:55420.service - OpenSSH per-connection server daemon (139.178.89.65:55420). Aug 6 07:44:19.066215 sshd[4128]: Accepted publickey for core from 139.178.89.65 port 55420 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:19.068630 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:19.076163 systemd-logind[1553]: New session 14 of user core. Aug 6 07:44:19.081177 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 6 07:44:19.236522 sshd[4128]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:19.242295 systemd[1]: sshd@14-64.23.156.122:22-139.178.89.65:55420.service: Deactivated successfully. Aug 6 07:44:19.248866 systemd[1]: session-14.scope: Deactivated successfully. Aug 6 07:44:19.250672 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit. Aug 6 07:44:19.252135 systemd-logind[1553]: Removed session 14. 
Aug 6 07:44:24.251178 systemd[1]: Started sshd@15-64.23.156.122:22-139.178.89.65:49204.service - OpenSSH per-connection server daemon (139.178.89.65:49204). Aug 6 07:44:24.301869 sshd[4142]: Accepted publickey for core from 139.178.89.65 port 49204 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:24.302777 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:24.310487 systemd-logind[1553]: New session 15 of user core. Aug 6 07:44:24.319993 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 6 07:44:24.466278 sshd[4142]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:24.471811 systemd[1]: sshd@15-64.23.156.122:22-139.178.89.65:49204.service: Deactivated successfully. Aug 6 07:44:24.478050 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit. Aug 6 07:44:24.478882 systemd[1]: session-15.scope: Deactivated successfully. Aug 6 07:44:24.482051 systemd-logind[1553]: Removed session 15. Aug 6 07:44:29.488089 systemd[1]: Started sshd@16-64.23.156.122:22-139.178.89.65:49214.service - OpenSSH per-connection server daemon (139.178.89.65:49214). Aug 6 07:44:29.538673 sshd[4157]: Accepted publickey for core from 139.178.89.65 port 49214 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:29.541007 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:29.547084 systemd-logind[1553]: New session 16 of user core. Aug 6 07:44:29.555047 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 6 07:44:29.710265 sshd[4157]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:29.720998 systemd[1]: Started sshd@17-64.23.156.122:22-139.178.89.65:49224.service - OpenSSH per-connection server daemon (139.178.89.65:49224). Aug 6 07:44:29.721808 systemd[1]: sshd@16-64.23.156.122:22-139.178.89.65:49214.service: Deactivated successfully. Aug 6 07:44:29.729132 systemd[1]: session-16.scope: Deactivated successfully. Aug 6 07:44:29.732524 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit. Aug 6 07:44:29.734295 systemd-logind[1553]: Removed session 16. Aug 6 07:44:29.776419 sshd[4169]: Accepted publickey for core from 139.178.89.65 port 49224 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:29.778144 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:29.786277 systemd-logind[1553]: New session 17 of user core. Aug 6 07:44:29.791345 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 6 07:44:30.244947 sshd[4169]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:30.257538 systemd[1]: Started sshd@18-64.23.156.122:22-139.178.89.65:49230.service - OpenSSH per-connection server daemon (139.178.89.65:49230). Aug 6 07:44:30.258344 systemd[1]: sshd@17-64.23.156.122:22-139.178.89.65:49224.service: Deactivated successfully. Aug 6 07:44:30.272459 systemd[1]: session-17.scope: Deactivated successfully. Aug 6 07:44:30.275107 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit. Aug 6 07:44:30.279236 systemd-logind[1553]: Removed session 17. Aug 6 07:44:30.335593 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 49230 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:30.338068 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:30.346988 systemd-logind[1553]: New session 18 of user core. 
Aug 6 07:44:30.357259 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 6 07:44:31.325058 sshd[4179]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:31.333051 systemd[1]: Started sshd@19-64.23.156.122:22-139.178.89.65:47360.service - OpenSSH per-connection server daemon (139.178.89.65:47360). Aug 6 07:44:31.339823 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit. Aug 6 07:44:31.343729 systemd[1]: sshd@18-64.23.156.122:22-139.178.89.65:49230.service: Deactivated successfully. Aug 6 07:44:31.355612 systemd[1]: session-18.scope: Deactivated successfully. Aug 6 07:44:31.365448 systemd-logind[1553]: Removed session 18. Aug 6 07:44:31.414400 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 47360 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:31.416484 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:31.423889 systemd-logind[1553]: New session 19 of user core. Aug 6 07:44:31.429035 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 6 07:44:31.830660 sshd[4198]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:31.855078 systemd[1]: Started sshd@20-64.23.156.122:22-139.178.89.65:47362.service - OpenSSH per-connection server daemon (139.178.89.65:47362). Aug 6 07:44:31.857238 systemd[1]: sshd@19-64.23.156.122:22-139.178.89.65:47360.service: Deactivated successfully. Aug 6 07:44:31.867878 systemd[1]: session-19.scope: Deactivated successfully. Aug 6 07:44:31.870632 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit. Aug 6 07:44:31.876323 systemd-logind[1553]: Removed session 19. Aug 6 07:44:31.906504 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 47362 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:31.909215 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:31.915957 systemd-logind[1553]: New session 20 of user core. Aug 6 07:44:31.926258 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 6 07:44:32.064841 sshd[4210]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:32.068844 systemd[1]: sshd@20-64.23.156.122:22-139.178.89.65:47362.service: Deactivated successfully. Aug 6 07:44:32.075077 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit. Aug 6 07:44:32.076188 systemd[1]: session-20.scope: Deactivated successfully. Aug 6 07:44:32.078155 systemd-logind[1553]: Removed session 20. Aug 6 07:44:37.077071 systemd[1]: Started sshd@21-64.23.156.122:22-139.178.89.65:47374.service - OpenSSH per-connection server daemon (139.178.89.65:47374). Aug 6 07:44:37.136629 sshd[4227]: Accepted publickey for core from 139.178.89.65 port 47374 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:37.138371 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:37.144338 systemd-logind[1553]: New session 21 of user core. Aug 6 07:44:37.149572 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 6 07:44:37.305252 sshd[4227]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:37.310901 systemd[1]: sshd@21-64.23.156.122:22-139.178.89.65:47374.service: Deactivated successfully. Aug 6 07:44:37.317701 systemd[1]: session-21.scope: Deactivated successfully. Aug 6 07:44:37.321413 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit. 
Aug 6 07:44:37.323640 systemd-logind[1553]: Removed session 21. Aug 6 07:44:38.358620 kubelet[2658]: E0806 07:44:38.358182 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:44:41.357825 kubelet[2658]: E0806 07:44:41.357377 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:44:42.321197 systemd[1]: Started sshd@22-64.23.156.122:22-139.178.89.65:36198.service - OpenSSH per-connection server daemon (139.178.89.65:36198). Aug 6 07:44:42.384398 sshd[4244]: Accepted publickey for core from 139.178.89.65 port 36198 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:42.387319 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:42.394662 systemd-logind[1553]: New session 22 of user core. Aug 6 07:44:42.405202 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 6 07:44:42.562106 sshd[4244]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:42.570808 systemd[1]: sshd@22-64.23.156.122:22-139.178.89.65:36198.service: Deactivated successfully. Aug 6 07:44:42.576967 systemd[1]: session-22.scope: Deactivated successfully. Aug 6 07:44:42.578718 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit. Aug 6 07:44:42.580919 systemd-logind[1553]: Removed session 22. Aug 6 07:44:46.357527 kubelet[2658]: E0806 07:44:46.357421 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:44:47.580309 systemd[1]: Started sshd@23-64.23.156.122:22-139.178.89.65:36206.service - OpenSSH per-connection server daemon (139.178.89.65:36206). Aug 6 07:44:47.636677 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 36206 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:47.638916 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:47.646144 systemd-logind[1553]: New session 23 of user core. Aug 6 07:44:47.650067 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 6 07:44:47.803752 sshd[4258]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:47.812340 systemd[1]: sshd@23-64.23.156.122:22-139.178.89.65:36206.service: Deactivated successfully. Aug 6 07:44:47.819049 systemd[1]: session-23.scope: Deactivated successfully. Aug 6 07:44:47.820665 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit. Aug 6 07:44:47.822360 systemd-logind[1553]: Removed session 23. Aug 6 07:44:51.359525 kubelet[2658]: E0806 07:44:51.359473 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:44:52.817148 systemd[1]: Started sshd@24-64.23.156.122:22-139.178.89.65:57002.service - OpenSSH per-connection server daemon (139.178.89.65:57002). 
Aug 6 07:44:52.870220 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 57002 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:52.872225 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:52.879090 systemd-logind[1553]: New session 24 of user core. Aug 6 07:44:52.888197 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 6 07:44:53.033950 sshd[4273]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:53.041439 systemd[1]: sshd@24-64.23.156.122:22-139.178.89.65:57002.service: Deactivated successfully. Aug 6 07:44:53.047869 systemd[1]: session-24.scope: Deactivated successfully. Aug 6 07:44:53.049209 systemd-logind[1553]: Session 24 logged out. Waiting for processes to exit. Aug 6 07:44:53.050882 systemd-logind[1553]: Removed session 24. Aug 6 07:44:56.357911 kubelet[2658]: E0806 07:44:56.357848 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:44:58.045438 systemd[1]: Started sshd@25-64.23.156.122:22-139.178.89.65:57008.service - OpenSSH per-connection server daemon (139.178.89.65:57008). Aug 6 07:44:58.094243 sshd[4287]: Accepted publickey for core from 139.178.89.65 port 57008 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:44:58.096673 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:44:58.103327 systemd-logind[1553]: New session 25 of user core. Aug 6 07:44:58.110125 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 6 07:44:58.255974 sshd[4287]: pam_unix(sshd:session): session closed for user core Aug 6 07:44:58.260558 systemd[1]: sshd@25-64.23.156.122:22-139.178.89.65:57008.service: Deactivated successfully. Aug 6 07:44:58.269811 systemd[1]: session-25.scope: Deactivated successfully. Aug 6 07:44:58.271577 systemd-logind[1553]: Session 25 logged out. Waiting for processes to exit. Aug 6 07:44:58.273430 systemd-logind[1553]: Removed session 25. Aug 6 07:44:59.359203 kubelet[2658]: E0806 07:44:59.358712 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 6 07:45:03.273252 systemd[1]: Started sshd@26-64.23.156.122:22-139.178.89.65:59224.service - OpenSSH per-connection server daemon (139.178.89.65:59224). Aug 6 07:45:03.374180 sshd[4303]: Accepted publickey for core from 139.178.89.65 port 59224 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:45:03.377328 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:45:03.387553 systemd-logind[1553]: New session 26 of user core. Aug 6 07:45:03.391106 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 6 07:45:03.596990 sshd[4303]: pam_unix(sshd:session): session closed for user core Aug 6 07:45:03.604490 systemd-logind[1553]: Session 26 logged out. Waiting for processes to exit. Aug 6 07:45:03.608962 systemd[1]: sshd@26-64.23.156.122:22-139.178.89.65:59224.service: Deactivated successfully. Aug 6 07:45:03.624626 systemd[1]: Started sshd@27-64.23.156.122:22-139.178.89.65:59226.service - OpenSSH per-connection server daemon (139.178.89.65:59226). Aug 6 07:45:03.626163 systemd[1]: session-26.scope: Deactivated successfully. Aug 6 07:45:03.634209 systemd-logind[1553]: Removed session 26. 
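The recurring kubelet dns.go "Nameserver limits exceeded" warnings above come from the node's resolv.conf carrying more (and duplicate) nameserver entries than the resolver honours; the applied line the kubelet falls back to is "67.207.67.2 67.207.67.3 67.207.67.2". Below is a minimal Go sketch of that kind of check, assuming a plain /etc/resolv.conf and the conventional glibc limit of three entries; it is an illustration only, not the kubelet's actual implementation.

// nameserver_check.go: count nameserver entries in resolv.conf and report when
// the glibc resolver limit of three would be exceeded (mirrors the condition
// logged above; simplified illustration, not kubelet code).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only the first %d apply: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	}
}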
Aug 6 07:45:03.707208 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 59226 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:45:03.710361 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:45:03.731562 systemd-logind[1553]: New session 27 of user core. Aug 6 07:45:03.745263 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 6 07:45:05.782034 containerd[1585]: time="2024-08-06T07:45:05.781440285Z" level=info msg="StopContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" with timeout 30 (s)" Aug 6 07:45:05.786752 containerd[1585]: time="2024-08-06T07:45:05.784431054Z" level=info msg="Stop container \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" with signal terminated" Aug 6 07:45:05.794438 containerd[1585]: time="2024-08-06T07:45:05.792457501Z" level=info msg="StopContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" with timeout 2 (s)" Aug 6 07:45:05.794438 containerd[1585]: time="2024-08-06T07:45:05.793806675Z" level=info msg="Stop container \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" with signal terminated" Aug 6 07:45:05.796981 containerd[1585]: time="2024-08-06T07:45:05.796889018Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 6 07:45:05.824235 systemd-networkd[1229]: lxc_health: Link DOWN Aug 6 07:45:05.824247 systemd-networkd[1229]: lxc_health: Lost carrier Aug 6 07:45:05.922130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67-rootfs.mount: Deactivated successfully. Aug 6 07:45:05.933699 containerd[1585]: time="2024-08-06T07:45:05.933604409Z" level=info msg="shim disconnected" id=53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67 namespace=k8s.io Aug 6 07:45:05.935395 containerd[1585]: time="2024-08-06T07:45:05.934227310Z" level=warning msg="cleaning up after shim disconnected" id=53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67 namespace=k8s.io Aug 6 07:45:05.935395 containerd[1585]: time="2024-08-06T07:45:05.935221002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:45:05.964344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178-rootfs.mount: Deactivated successfully. 
Aug 6 07:45:05.970025 containerd[1585]: time="2024-08-06T07:45:05.969658635Z" level=info msg="shim disconnected" id=ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178 namespace=k8s.io Aug 6 07:45:05.970025 containerd[1585]: time="2024-08-06T07:45:05.969741642Z" level=warning msg="cleaning up after shim disconnected" id=ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178 namespace=k8s.io Aug 6 07:45:05.970025 containerd[1585]: time="2024-08-06T07:45:05.969753973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:45:06.069744 containerd[1585]: time="2024-08-06T07:45:06.069100251Z" level=info msg="StopContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" returns successfully" Aug 6 07:45:06.072451 containerd[1585]: time="2024-08-06T07:45:06.071983218Z" level=info msg="StopContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" returns successfully" Aug 6 07:45:06.072451 containerd[1585]: time="2024-08-06T07:45:06.072031037Z" level=info msg="StopPodSandbox for \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\"" Aug 6 07:45:06.072451 containerd[1585]: time="2024-08-06T07:45:06.072208759Z" level=info msg="Container to stop \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.074943143Z" level=info msg="StopPodSandbox for \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.075132153Z" level=info msg="Container to stop \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.075213817Z" level=info msg="Container to stop \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.075229576Z" level=info msg="Container to stop \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.075246670Z" level=info msg="Container to stop \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.075773 containerd[1585]: time="2024-08-06T07:45:06.075263426Z" level=info msg="Container to stop \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:45:06.078656 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa-shm.mount: Deactivated successfully. Aug 6 07:45:06.086155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca-shm.mount: Deactivated successfully. 
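The shim-disconnected and StopContainer messages above trace containerd stopping the old cilium containers after SIGTERM, with the logged 30- and 2-second timeouts. A rough sketch of that stop-and-delete flow through containerd's Go client in the k8s.io namespace follows; the socket path and container ID are placeholders, and in reality the teardown is driven by the kubelet over CRI rather than by a standalone program like this.

// stop_container.go: hedged sketch of stopping and deleting a containerd task,
// loosely mirroring the StopContainer sequence recorded in the log above.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // placeholder socket path
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	container, err := client.LoadContainer(ctx, "<container-id>") // placeholder ID
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Register for the exit event, send SIGTERM first ("Stop container ... with
	// signal terminated"), then escalate to SIGKILL if the grace period expires.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case <-exitCh:
	case <-time.After(30 * time.Second): // matches the longer timeout in the log
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
	if _, err := task.Delete(ctx); err != nil {
		log.Fatal(err)
	}
	if err := container.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
		log.Fatal(err)
	}
}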
Aug 6 07:45:06.156387 containerd[1585]: time="2024-08-06T07:45:06.156015893Z" level=info msg="shim disconnected" id=ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca namespace=k8s.io Aug 6 07:45:06.156387 containerd[1585]: time="2024-08-06T07:45:06.156167812Z" level=warning msg="cleaning up after shim disconnected" id=ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca namespace=k8s.io Aug 6 07:45:06.156387 containerd[1585]: time="2024-08-06T07:45:06.156186090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:45:06.190721 containerd[1585]: time="2024-08-06T07:45:06.190402837Z" level=info msg="shim disconnected" id=8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa namespace=k8s.io Aug 6 07:45:06.190721 containerd[1585]: time="2024-08-06T07:45:06.190497522Z" level=warning msg="cleaning up after shim disconnected" id=8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa namespace=k8s.io Aug 6 07:45:06.190721 containerd[1585]: time="2024-08-06T07:45:06.190514138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:45:06.206082 containerd[1585]: time="2024-08-06T07:45:06.205830828Z" level=info msg="TearDown network for sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" successfully" Aug 6 07:45:06.206082 containerd[1585]: time="2024-08-06T07:45:06.205898725Z" level=info msg="StopPodSandbox for \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" returns successfully" Aug 6 07:45:06.267914 containerd[1585]: time="2024-08-06T07:45:06.267764992Z" level=info msg="TearDown network for sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" successfully" Aug 6 07:45:06.267914 containerd[1585]: time="2024-08-06T07:45:06.267804636Z" level=info msg="StopPodSandbox for \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" returns successfully" Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378175 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d7231de-f87e-403a-8a31-7dcef96a3150-clustermesh-secrets\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378281 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-hubble-tls\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378324 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-xtables-lock\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378368 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-cilium-config-path\") pod \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\" (UID: \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\") " Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378407 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-config-path\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.380568 kubelet[2658]: I0806 07:45:06.378440 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-bpf-maps\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378478 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmxjj\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-kube-api-access-bmxjj\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378510 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-net\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378548 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ktq2\" (UniqueName: \"kubernetes.io/projected/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-kube-api-access-9ktq2\") pod \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\" (UID: \"a67e97f7-ae7a-4a7e-942a-6f5d7bff829f\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378578 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-lib-modules\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378640 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-kernel\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.381522 kubelet[2658]: I0806 07:45:06.378674 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-run\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.385019 kubelet[2658]: I0806 07:45:06.378701 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-hostproc\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.385019 kubelet[2658]: I0806 07:45:06.378729 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-etc-cni-netd\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.385019 kubelet[2658]: I0806 07:45:06.378756 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-cgroup\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.385019 kubelet[2658]: I0806 07:45:06.378783 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cni-path\") pod \"3d7231de-f87e-403a-8a31-7dcef96a3150\" (UID: \"3d7231de-f87e-403a-8a31-7dcef96a3150\") " Aug 6 07:45:06.390313 kubelet[2658]: I0806 07:45:06.389508 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.391233 kubelet[2658]: I0806 07:45:06.390711 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.394468 kubelet[2658]: I0806 07:45:06.394418 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.394841 kubelet[2658]: I0806 07:45:06.394806 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.395141 kubelet[2658]: I0806 07:45:06.395109 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.395276 kubelet[2658]: I0806 07:45:06.395257 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.395422 kubelet[2658]: I0806 07:45:06.395403 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.395624 kubelet[2658]: I0806 07:45:06.395542 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.397619 kubelet[2658]: I0806 07:45:06.397406 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.398418 kubelet[2658]: I0806 07:45:06.398173 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:45:06.402571 kubelet[2658]: I0806 07:45:06.401978 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-kube-api-access-9ktq2" (OuterVolumeSpecName: "kube-api-access-9ktq2") pod "a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" (UID: "a67e97f7-ae7a-4a7e-942a-6f5d7bff829f"). InnerVolumeSpecName "kube-api-access-9ktq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:45:06.402571 kubelet[2658]: I0806 07:45:06.402034 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 6 07:45:06.410046 kubelet[2658]: I0806 07:45:06.409983 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-kube-api-access-bmxjj" (OuterVolumeSpecName: "kube-api-access-bmxjj") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "kube-api-access-bmxjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:45:06.412105 kubelet[2658]: I0806 07:45:06.411937 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:45:06.413379 kubelet[2658]: I0806 07:45:06.413091 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" (UID: "a67e97f7-ae7a-4a7e-942a-6f5d7bff829f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 6 07:45:06.438213 kubelet[2658]: I0806 07:45:06.438149 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7231de-f87e-403a-8a31-7dcef96a3150-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d7231de-f87e-403a-8a31-7dcef96a3150" (UID: "3d7231de-f87e-403a-8a31-7dcef96a3150"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.479991 2658 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-bpf-maps\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480447 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-config-path\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480491 2658 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bmxjj\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-kube-api-access-bmxjj\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480523 2658 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-net\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480545 2658 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9ktq2\" (UniqueName: \"kubernetes.io/projected/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-kube-api-access-9ktq2\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480564 2658 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-lib-modules\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480628 2658 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-host-proc-sys-kernel\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.480846 kubelet[2658]: I0806 07:45:06.480653 2658 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cni-path\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480672 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-run\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480691 2658 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-hostproc\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480708 2658 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-etc-cni-netd\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480724 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-cilium-cgroup\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480742 2658 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d7231de-f87e-403a-8a31-7dcef96a3150-clustermesh-secrets\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480760 2658 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d7231de-f87e-403a-8a31-7dcef96a3150-hubble-tls\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480778 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f-cilium-config-path\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.481407 kubelet[2658]: I0806 07:45:06.480795 2658 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d7231de-f87e-403a-8a31-7dcef96a3150-xtables-lock\") on node \"ci-4012.1.0-5-8b675ffd7f\" DevicePath \"\"" Aug 6 07:45:06.703925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca-rootfs.mount: Deactivated successfully. Aug 6 07:45:06.704251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa-rootfs.mount: Deactivated successfully. Aug 6 07:45:06.704422 systemd[1]: var-lib-kubelet-pods-a67e97f7\x2dae7a\x2d4a7e\x2d942a\x2d6f5d7bff829f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ktq2.mount: Deactivated successfully. Aug 6 07:45:06.704640 systemd[1]: var-lib-kubelet-pods-3d7231de\x2df87e\x2d403a\x2d8a31\x2d7dcef96a3150-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmxjj.mount: Deactivated successfully. Aug 6 07:45:06.704790 systemd[1]: var-lib-kubelet-pods-3d7231de\x2df87e\x2d403a\x2d8a31\x2d7dcef96a3150-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 6 07:45:06.704942 systemd[1]: var-lib-kubelet-pods-3d7231de\x2df87e\x2d403a\x2d8a31\x2d7dcef96a3150-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 6 07:45:06.798498 kubelet[2658]: I0806 07:45:06.796228 2658 scope.go:117] "RemoveContainer" containerID="53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67" Aug 6 07:45:06.803765 containerd[1585]: time="2024-08-06T07:45:06.803660634Z" level=info msg="RemoveContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\"" Aug 6 07:45:06.811435 containerd[1585]: time="2024-08-06T07:45:06.811210029Z" level=info msg="RemoveContainer for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" returns successfully" Aug 6 07:45:06.812653 kubelet[2658]: I0806 07:45:06.812206 2658 scope.go:117] "RemoveContainer" containerID="53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67" Aug 6 07:45:06.816932 containerd[1585]: time="2024-08-06T07:45:06.816754329Z" level=error msg="ContainerStatus for \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\": not found" Aug 6 07:45:06.817216 kubelet[2658]: E0806 07:45:06.817108 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\": not found" containerID="53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67" Aug 6 07:45:06.840969 kubelet[2658]: I0806 07:45:06.840692 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67"} err="failed to get container status \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\": rpc error: code = NotFound desc = an error occurred when try to find container \"53fc40992df03a1173d914ee769fcdf4d84b9e49c7ee695884f60e94f8a04a67\": not found" Aug 6 07:45:06.840969 kubelet[2658]: I0806 07:45:06.840748 2658 scope.go:117] "RemoveContainer" containerID="ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178" Aug 6 07:45:06.867475 containerd[1585]: time="2024-08-06T07:45:06.867282980Z" level=info msg="RemoveContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\"" Aug 6 07:45:06.884385 containerd[1585]: time="2024-08-06T07:45:06.879433041Z" level=info msg="RemoveContainer for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" returns successfully" Aug 6 07:45:06.885631 kubelet[2658]: I0806 07:45:06.885103 2658 scope.go:117] "RemoveContainer" containerID="faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603" Aug 6 07:45:06.888798 containerd[1585]: time="2024-08-06T07:45:06.888334939Z" level=info msg="RemoveContainer for \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\"" Aug 6 07:45:06.896925 containerd[1585]: time="2024-08-06T07:45:06.896800210Z" level=info msg="RemoveContainer for \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\" returns successfully" Aug 6 07:45:06.898378 kubelet[2658]: I0806 07:45:06.897399 2658 scope.go:117] "RemoveContainer" containerID="143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba" Aug 6 07:45:06.904451 containerd[1585]: time="2024-08-06T07:45:06.903065549Z" level=info msg="RemoveContainer for \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\"" Aug 6 07:45:06.908811 containerd[1585]: time="2024-08-06T07:45:06.908541964Z" level=info 
msg="RemoveContainer for \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\" returns successfully" Aug 6 07:45:06.909393 kubelet[2658]: I0806 07:45:06.908993 2658 scope.go:117] "RemoveContainer" containerID="99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608" Aug 6 07:45:06.913603 containerd[1585]: time="2024-08-06T07:45:06.913000029Z" level=info msg="RemoveContainer for \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\"" Aug 6 07:45:06.920499 containerd[1585]: time="2024-08-06T07:45:06.920262518Z" level=info msg="RemoveContainer for \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\" returns successfully" Aug 6 07:45:06.920986 kubelet[2658]: I0806 07:45:06.920678 2658 scope.go:117] "RemoveContainer" containerID="34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2" Aug 6 07:45:06.923031 containerd[1585]: time="2024-08-06T07:45:06.922673048Z" level=info msg="RemoveContainer for \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\"" Aug 6 07:45:06.926065 containerd[1585]: time="2024-08-06T07:45:06.926016069Z" level=info msg="RemoveContainer for \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\" returns successfully" Aug 6 07:45:06.927558 kubelet[2658]: I0806 07:45:06.927501 2658 scope.go:117] "RemoveContainer" containerID="ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178" Aug 6 07:45:06.929428 containerd[1585]: time="2024-08-06T07:45:06.928777351Z" level=error msg="ContainerStatus for \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\": not found" Aug 6 07:45:06.929791 kubelet[2658]: E0806 07:45:06.929145 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\": not found" containerID="ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178" Aug 6 07:45:06.929791 kubelet[2658]: I0806 07:45:06.929230 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178"} err="failed to get container status \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba5e838465a1b20837d0780b1384ed03e511b272fa6326f96ff30b829848b178\": not found" Aug 6 07:45:06.929791 kubelet[2658]: I0806 07:45:06.929264 2658 scope.go:117] "RemoveContainer" containerID="faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603" Aug 6 07:45:06.929985 containerd[1585]: time="2024-08-06T07:45:06.929738576Z" level=error msg="ContainerStatus for \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\": not found" Aug 6 07:45:06.931248 kubelet[2658]: E0806 07:45:06.930464 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\": not found" 
containerID="faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603" Aug 6 07:45:06.931248 kubelet[2658]: I0806 07:45:06.930540 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603"} err="failed to get container status \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\": rpc error: code = NotFound desc = an error occurred when try to find container \"faec5324e73124c1be88e7ed19442bb7af9df603424132abc4d00ff036adc603\": not found" Aug 6 07:45:06.931248 kubelet[2658]: I0806 07:45:06.930568 2658 scope.go:117] "RemoveContainer" containerID="143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba" Aug 6 07:45:06.931684 containerd[1585]: time="2024-08-06T07:45:06.930957852Z" level=error msg="ContainerStatus for \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\": not found" Aug 6 07:45:06.932718 kubelet[2658]: E0806 07:45:06.932006 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\": not found" containerID="143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba" Aug 6 07:45:06.932718 kubelet[2658]: I0806 07:45:06.932189 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba"} err="failed to get container status \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"143a3b26378c20c1d06ca61e15e9e9068550b040ee42d115d02b53380a7b72ba\": not found" Aug 6 07:45:06.932718 kubelet[2658]: I0806 07:45:06.932219 2658 scope.go:117] "RemoveContainer" containerID="99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608" Aug 6 07:45:06.932972 containerd[1585]: time="2024-08-06T07:45:06.932553947Z" level=error msg="ContainerStatus for \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\": not found" Aug 6 07:45:06.933756 kubelet[2658]: E0806 07:45:06.933214 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\": not found" containerID="99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608" Aug 6 07:45:06.933756 kubelet[2658]: I0806 07:45:06.933270 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608"} err="failed to get container status \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\": rpc error: code = NotFound desc = an error occurred when try to find container \"99e634f001f0bc1a33d811589137884610c04afe9b411f585ff84132652ed608\": not found" Aug 6 07:45:06.933756 kubelet[2658]: I0806 07:45:06.933290 2658 scope.go:117] "RemoveContainer" 
containerID="34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2" Aug 6 07:45:06.934012 containerd[1585]: time="2024-08-06T07:45:06.933639221Z" level=error msg="ContainerStatus for \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\": not found" Aug 6 07:45:06.934073 kubelet[2658]: E0806 07:45:06.933911 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\": not found" containerID="34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2" Aug 6 07:45:06.934073 kubelet[2658]: I0806 07:45:06.933961 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2"} err="failed to get container status \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"34a0c77eb419e0a497c10cd2003d609e9520169677d32fac7cf741c95b74adf2\": not found" Aug 6 07:45:07.361668 kubelet[2658]: I0806 07:45:07.361293 2658 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" path="/var/lib/kubelet/pods/3d7231de-f87e-403a-8a31-7dcef96a3150/volumes" Aug 6 07:45:07.363157 kubelet[2658]: I0806 07:45:07.362915 2658 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" path="/var/lib/kubelet/pods/a67e97f7-ae7a-4a7e-942a-6f5d7bff829f/volumes" Aug 6 07:45:07.538321 sshd[4317]: pam_unix(sshd:session): session closed for user core Aug 6 07:45:07.552217 systemd[1]: Started sshd@28-64.23.156.122:22-139.178.89.65:59240.service - OpenSSH per-connection server daemon (139.178.89.65:59240). Aug 6 07:45:07.553424 systemd[1]: sshd@27-64.23.156.122:22-139.178.89.65:59226.service: Deactivated successfully. Aug 6 07:45:07.566709 systemd[1]: session-27.scope: Deactivated successfully. Aug 6 07:45:07.573534 systemd-logind[1553]: Session 27 logged out. Waiting for processes to exit. Aug 6 07:45:07.578914 systemd-logind[1553]: Removed session 27. Aug 6 07:45:07.661145 sshd[4484]: Accepted publickey for core from 139.178.89.65 port 59240 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:45:07.663867 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:45:07.674128 systemd-logind[1553]: New session 28 of user core. Aug 6 07:45:07.679207 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 6 07:45:08.770607 sshd[4484]: pam_unix(sshd:session): session closed for user core Aug 6 07:45:08.796876 systemd[1]: Started sshd@29-64.23.156.122:22-139.178.89.65:59252.service - OpenSSH per-connection server daemon (139.178.89.65:59252). Aug 6 07:45:08.801759 systemd[1]: sshd@28-64.23.156.122:22-139.178.89.65:59240.service: Deactivated successfully. Aug 6 07:45:08.813659 systemd[1]: session-28.scope: Deactivated successfully. 
Aug 6 07:45:08.823618 kubelet[2658]: I0806 07:45:08.823541 2658 topology_manager.go:215] "Topology Admit Handler" podUID="a620425c-e490-46d5-aa7e-186cd70c3cfd" podNamespace="kube-system" podName="cilium-spv4r" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823660 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="mount-cgroup" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823680 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="apply-sysctl-overwrites" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823691 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="mount-bpf-fs" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823703 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" containerName="cilium-operator" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823713 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="clean-cilium-state" Aug 6 07:45:08.826077 kubelet[2658]: E0806 07:45:08.823724 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="cilium-agent" Aug 6 07:45:08.826077 kubelet[2658]: I0806 07:45:08.823759 2658 memory_manager.go:346] "RemoveStaleState removing state" podUID="a67e97f7-ae7a-4a7e-942a-6f5d7bff829f" containerName="cilium-operator" Aug 6 07:45:08.826077 kubelet[2658]: I0806 07:45:08.823769 2658 memory_manager.go:346] "RemoveStaleState removing state" podUID="3d7231de-f87e-403a-8a31-7dcef96a3150" containerName="cilium-agent" Aug 6 07:45:08.831832 systemd-logind[1553]: Session 28 logged out. Waiting for processes to exit. Aug 6 07:45:08.851583 systemd-logind[1553]: Removed session 28. 
Aug 6 07:45:08.860647 kubelet[2658]: W0806 07:45:08.853372 2658 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.860647 kubelet[2658]: E0806 07:45:08.853431 2658 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.860647 kubelet[2658]: W0806 07:45:08.853496 2658 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.860647 kubelet[2658]: E0806 07:45:08.853511 2658 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.860647 kubelet[2658]: W0806 07:45:08.853560 2658 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.861051 kubelet[2658]: E0806 07:45:08.853572 2658 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.861051 kubelet[2658]: W0806 07:45:08.853641 2658 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.861051 kubelet[2658]: E0806 07:45:08.853656 2658 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012.1.0-5-8b675ffd7f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012.1.0-5-8b675ffd7f' and this object Aug 6 07:45:08.896800 sshd[4497]: Accepted publickey for core from 139.178.89.65 port 59252 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:45:08.907935 sshd[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 
07:45:08.931907 systemd-logind[1553]: New session 29 of user core. Aug 6 07:45:08.937099 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003024 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-etc-cni-netd\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003093 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-host-proc-sys-kernel\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003126 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-cni-path\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003154 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-config-path\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003179 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-bpf-maps\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003299 kubelet[2658]: I0806 07:45:09.003199 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-hostproc\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 07:45:09.003218 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a620425c-e490-46d5-aa7e-186cd70c3cfd-clustermesh-secrets\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 07:45:09.003431 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-host-proc-sys-net\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 07:45:09.003485 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a620425c-e490-46d5-aa7e-186cd70c3cfd-hubble-tls\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 
07:45:09.003507 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-lib-modules\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 07:45:09.003559 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnd46\" (UniqueName: \"kubernetes.io/projected/a620425c-e490-46d5-aa7e-186cd70c3cfd-kube-api-access-cnd46\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.003977 kubelet[2658]: I0806 07:45:09.003651 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-cgroup\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.004644 kubelet[2658]: I0806 07:45:09.003672 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-xtables-lock\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.004644 kubelet[2658]: I0806 07:45:09.003692 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-ipsec-secrets\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.004644 kubelet[2658]: I0806 07:45:09.003742 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-run\") pod \"cilium-spv4r\" (UID: \"a620425c-e490-46d5-aa7e-186cd70c3cfd\") " pod="kube-system/cilium-spv4r" Aug 6 07:45:09.012870 sshd[4497]: pam_unix(sshd:session): session closed for user core Aug 6 07:45:09.026045 systemd[1]: Started sshd@30-64.23.156.122:22-139.178.89.65:59268.service - OpenSSH per-connection server daemon (139.178.89.65:59268). Aug 6 07:45:09.027966 systemd[1]: sshd@29-64.23.156.122:22-139.178.89.65:59252.service: Deactivated successfully. Aug 6 07:45:09.033808 systemd[1]: session-29.scope: Deactivated successfully. Aug 6 07:45:09.038205 systemd-logind[1553]: Session 29 logged out. Waiting for processes to exit. Aug 6 07:45:09.042660 systemd-logind[1553]: Removed session 29. Aug 6 07:45:09.089188 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 59268 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:45:09.091631 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:45:09.099701 systemd-logind[1553]: New session 30 of user core. Aug 6 07:45:09.107045 systemd[1]: Started session-30.scope - Session 30 of User core. 
Aug 6 07:45:10.105193 kubelet[2658]: E0806 07:45:10.105046 2658 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Aug 6 07:45:10.115949 kubelet[2658]: E0806 07:45:10.115866 2658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-config-path podName:a620425c-e490-46d5-aa7e-186cd70c3cfd nodeName:}" failed. No retries permitted until 2024-08-06 07:45:10.605193867 +0000 UTC m=+115.469497574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a620425c-e490-46d5-aa7e-186cd70c3cfd-cilium-config-path") pod "cilium-spv4r" (UID: "a620425c-e490-46d5-aa7e-186cd70c3cfd") : failed to sync configmap cache: timed out waiting for the condition
Aug 6 07:45:10.357998 kubelet[2658]: E0806 07:45:10.357781 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:10.593882 kubelet[2658]: E0806 07:45:10.593820 2658 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 6 07:45:10.664212 kubelet[2658]: E0806 07:45:10.663975 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:10.665687 containerd[1585]: time="2024-08-06T07:45:10.665017883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spv4r,Uid:a620425c-e490-46d5-aa7e-186cd70c3cfd,Namespace:kube-system,Attempt:0,}"
Aug 6 07:45:10.705840 containerd[1585]: time="2024-08-06T07:45:10.705657764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 6 07:45:10.707494 containerd[1585]: time="2024-08-06T07:45:10.707148701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:45:10.707494 containerd[1585]: time="2024-08-06T07:45:10.707205465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 6 07:45:10.707494 containerd[1585]: time="2024-08-06T07:45:10.707222744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 6 07:45:10.775165 containerd[1585]: time="2024-08-06T07:45:10.775122786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spv4r,Uid:a620425c-e490-46d5-aa7e-186cd70c3cfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\""
Aug 6 07:45:10.776983 kubelet[2658]: E0806 07:45:10.776540 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:10.785245 containerd[1585]: time="2024-08-06T07:45:10.784462218Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 6 07:45:10.804793 containerd[1585]: time="2024-08-06T07:45:10.804519641Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83986bf321cc5f65c22f6c75a506558b1360729311a26e870d034b968a5b026d\""
Aug 6 07:45:10.806040 containerd[1585]: time="2024-08-06T07:45:10.805975316Z" level=info msg="StartContainer for \"83986bf321cc5f65c22f6c75a506558b1360729311a26e870d034b968a5b026d\""
Aug 6 07:45:10.807563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537723999.mount: Deactivated successfully.
Aug 6 07:45:10.906711 containerd[1585]: time="2024-08-06T07:45:10.906626334Z" level=info msg="StartContainer for \"83986bf321cc5f65c22f6c75a506558b1360729311a26e870d034b968a5b026d\" returns successfully"
Aug 6 07:45:10.971300 containerd[1585]: time="2024-08-06T07:45:10.971029644Z" level=info msg="shim disconnected" id=83986bf321cc5f65c22f6c75a506558b1360729311a26e870d034b968a5b026d namespace=k8s.io
Aug 6 07:45:10.971300 containerd[1585]: time="2024-08-06T07:45:10.971123426Z" level=warning msg="cleaning up after shim disconnected" id=83986bf321cc5f65c22f6c75a506558b1360729311a26e870d034b968a5b026d namespace=k8s.io
Aug 6 07:45:10.971300 containerd[1585]: time="2024-08-06T07:45:10.971140322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 6 07:45:11.870273 kubelet[2658]: E0806 07:45:11.869959 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:11.879329 containerd[1585]: time="2024-08-06T07:45:11.877192102Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 6 07:45:11.894922 containerd[1585]: time="2024-08-06T07:45:11.894853046Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470\""
Aug 6 07:45:11.898395 containerd[1585]: time="2024-08-06T07:45:11.896731238Z" level=info msg="StartContainer for \"141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470\""
Aug 6 07:45:12.081490 containerd[1585]: time="2024-08-06T07:45:12.081424174Z" level=info msg="StartContainer for \"141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470\" returns successfully"
Aug 6 07:45:12.130633 containerd[1585]: time="2024-08-06T07:45:12.128904449Z" level=info msg="shim disconnected" id=141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470 namespace=k8s.io
Aug 6 07:45:12.130633 containerd[1585]: time="2024-08-06T07:45:12.128973565Z" level=warning msg="cleaning up after shim disconnected" id=141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470 namespace=k8s.io
Aug 6 07:45:12.130633 containerd[1585]: time="2024-08-06T07:45:12.128982757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 6 07:45:12.679737 systemd[1]: run-containerd-runc-k8s.io-141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470-runc.f00cbz.mount: Deactivated successfully.
Aug 6 07:45:12.680241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-141ed1c9e0485c4ebbde08ccf2865d17c557e5b633eeeb5661b7a6d110907470-rootfs.mount: Deactivated successfully.
Aug 6 07:45:12.876625 kubelet[2658]: E0806 07:45:12.875793 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:12.888671 containerd[1585]: time="2024-08-06T07:45:12.885902526Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 6 07:45:12.925235 containerd[1585]: time="2024-08-06T07:45:12.925153626Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0\""
Aug 6 07:45:12.928654 containerd[1585]: time="2024-08-06T07:45:12.928348774Z" level=info msg="StartContainer for \"c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0\""
Aug 6 07:45:13.033434 containerd[1585]: time="2024-08-06T07:45:13.033372295Z" level=info msg="StartContainer for \"c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0\" returns successfully"
Aug 6 07:45:13.082904 containerd[1585]: time="2024-08-06T07:45:13.082796490Z" level=info msg="shim disconnected" id=c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0 namespace=k8s.io
Aug 6 07:45:13.082904 containerd[1585]: time="2024-08-06T07:45:13.082896553Z" level=warning msg="cleaning up after shim disconnected" id=c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0 namespace=k8s.io
Aug 6 07:45:13.083273 containerd[1585]: time="2024-08-06T07:45:13.082929585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 6 07:45:13.680300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c957df34dd7842e2f1410c80d69eae3151e00e330daa5f9b2a13baf20abb66b0-rootfs.mount: Deactivated successfully.
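[Editor's note] The configmap.go and nestedpendingoperations.go errors at the top of this block show the usual kubelet behaviour when a volume's ConfigMap has not yet reached its local cache: the MountVolume.SetUp attempt fails and the operation is re-queued with a delay that grows on every failure (500ms here, as the durationBeforeRetry field records). The Go sketch below illustrates that exponential-backoff shape in generic form; it is not the kubelet's code, and the base delay, doubling factor, cap and attempt count are assumed values.

// backoff.go - illustrative sketch of an exponential-backoff retry loop of the
// kind applied to failed volume mounts. Base, factor and cap are assumptions.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling op until it succeeds or attempts run out,
// sleeping base, 2*base, 4*base, ... (capped at maxDelay) between failures.
func retryWithBackoff(op func() error, base, maxDelay time.Duration, attempts int) error {
	delay := base
	for i := 1; i <= attempts; i++ {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("all attempts failed")
}

func main() {
	calls := 0
	// Stand-in for the mount operation that fails until the cache has synced.
	mountConfigMapVolume := func() error {
		calls++
		if calls < 3 {
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}
	if err := retryWithBackoff(mountConfigMapVolume, 500*time.Millisecond, 2*time.Minute, 5); err != nil {
		fmt.Println(err)
	}
}

In the log, the second attempt roughly half a second later succeeds, which is why no further cilium-config-path errors appear.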
Aug 6 07:45:13.881533 kubelet[2658]: E0806 07:45:13.880961 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:13.889860 containerd[1585]: time="2024-08-06T07:45:13.888858185Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 6 07:45:13.910471 containerd[1585]: time="2024-08-06T07:45:13.906941505Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9\""
Aug 6 07:45:13.910471 containerd[1585]: time="2024-08-06T07:45:13.908363632Z" level=info msg="StartContainer for \"2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9\""
Aug 6 07:45:14.015678 containerd[1585]: time="2024-08-06T07:45:14.015387583Z" level=info msg="StartContainer for \"2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9\" returns successfully"
Aug 6 07:45:14.049050 containerd[1585]: time="2024-08-06T07:45:14.048954045Z" level=info msg="shim disconnected" id=2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9 namespace=k8s.io
Aug 6 07:45:14.049050 containerd[1585]: time="2024-08-06T07:45:14.049037869Z" level=warning msg="cleaning up after shim disconnected" id=2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9 namespace=k8s.io
Aug 6 07:45:14.049050 containerd[1585]: time="2024-08-06T07:45:14.049050060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 6 07:45:14.680494 systemd[1]: run-containerd-runc-k8s.io-2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9-runc.BB4XqH.mount: Deactivated successfully.
Aug 6 07:45:14.683002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c445cea6a9ca15f6a351eb0b36d012da690739ef4db06185b8caf9417cf76d9-rootfs.mount: Deactivated successfully.
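[Editor's note] The repeating CreateContainer, StartContainer, "shim disconnected" and rootfs-unmount entries above are the normal lifecycle of each short-lived Cilium init container: containerd creates the container inside sandbox 87d9ba34..., starts a task for it, the task exits, the shim goes away, and systemd reports the temporary mounts being cleaned up. The sketch below drives the same create/start/wait/cleanup cycle through containerd's Go client; in reality the kubelet goes through CRI, and the socket path, image reference and container ID here are assumptions made for the example.

// runtask.go - illustrative sketch of the container lifecycle traced above,
// using containerd's Go client directly instead of the CRI path.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Assumed image; the real init containers use the Cilium image.
	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "example-init",
		containerd.WithNewSnapshot("example-init-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx) // reap the task once the shim reports it has exited

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("init-style container exited with status %d", code)
}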
Aug 6 07:45:14.888638 kubelet[2658]: E0806 07:45:14.887948 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:14.893084 containerd[1585]: time="2024-08-06T07:45:14.893019096Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 6 07:45:14.923894 containerd[1585]: time="2024-08-06T07:45:14.923770810Z" level=info msg="CreateContainer within sandbox \"87d9ba348024e22864f11283ea2a4d8d4ab768f0dff20edc772fa2657252cffe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"532f32902392902b7a42462563583141ae068900ae80f8df849a00cde553cef3\""
Aug 6 07:45:14.927229 containerd[1585]: time="2024-08-06T07:45:14.925710268Z" level=info msg="StartContainer for \"532f32902392902b7a42462563583141ae068900ae80f8df849a00cde553cef3\""
Aug 6 07:45:15.011834 containerd[1585]: time="2024-08-06T07:45:15.011775209Z" level=info msg="StartContainer for \"532f32902392902b7a42462563583141ae068900ae80f8df849a00cde553cef3\" returns successfully"
Aug 6 07:45:15.341910 containerd[1585]: time="2024-08-06T07:45:15.341761943Z" level=info msg="StopPodSandbox for \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\""
Aug 6 07:45:15.342650 containerd[1585]: time="2024-08-06T07:45:15.342537647Z" level=info msg="TearDown network for sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" successfully"
Aug 6 07:45:15.342876 containerd[1585]: time="2024-08-06T07:45:15.342776496Z" level=info msg="StopPodSandbox for \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" returns successfully"
Aug 6 07:45:15.343828 containerd[1585]: time="2024-08-06T07:45:15.343691029Z" level=info msg="RemovePodSandbox for \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\""
Aug 6 07:45:15.343828 containerd[1585]: time="2024-08-06T07:45:15.343720595Z" level=info msg="Forcibly stopping sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\""
Aug 6 07:45:15.344159 containerd[1585]: time="2024-08-06T07:45:15.343799594Z" level=info msg="TearDown network for sandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" successfully"
Aug 6 07:45:15.349338 containerd[1585]: time="2024-08-06T07:45:15.349115502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
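[Editor's note] Taken together, the entries above trace the ordered init chain of the Cilium pod: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, each run to completion before the next, and only then the long-running cilium-agent container. The sketch below expresses that ordering with the Kubernetes API types; it is a heavily reduced stand-in for the real Cilium DaemonSet manifest, and the image tag and the omitted commands, mounts and security settings are assumptions.

// ciliumpod.go - illustrative sketch of the init-container ordering the log
// above walks through; not the real Cilium manifest.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	image := "quay.io/cilium/cilium:v1.14" // assumed tag

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cilium-spv4r", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			// Init containers run sequentially; each must exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup", Image: image},
				{Name: "apply-sysctl-overwrites", Image: image},
				{Name: "mount-bpf-fs", Image: image},
				{Name: "clean-cilium-state", Image: image},
			},
			// The long-running agent only starts after the init chain completes.
			Containers: []corev1.Container{
				{Name: "cilium-agent", Image: image},
			},
		},
	}

	for i, c := range pod.Spec.InitContainers {
		fmt.Printf("init %d: %s\n", i+1, c.Name)
	}
	fmt.Printf("main:   %s\n", pod.Spec.Containers[0].Name)
}

The StopPodSandbox / RemovePodSandbox entries that follow the agent start are unrelated housekeeping: the kubelet garbage-collecting two sandboxes that no longer exist.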
Aug 6 07:45:15.349338 containerd[1585]: time="2024-08-06T07:45:15.349224147Z" level=info msg="RemovePodSandbox \"8e1443b8336ceb26f678e1bafccf1a310ae48f21d2c0185ee8e9fa620e9087fa\" returns successfully"
Aug 6 07:45:15.351142 containerd[1585]: time="2024-08-06T07:45:15.350754217Z" level=info msg="StopPodSandbox for \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\""
Aug 6 07:45:15.351142 containerd[1585]: time="2024-08-06T07:45:15.350915718Z" level=info msg="TearDown network for sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" successfully"
Aug 6 07:45:15.351142 containerd[1585]: time="2024-08-06T07:45:15.350948749Z" level=info msg="StopPodSandbox for \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" returns successfully"
Aug 6 07:45:15.352536 containerd[1585]: time="2024-08-06T07:45:15.351918590Z" level=info msg="RemovePodSandbox for \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\""
Aug 6 07:45:15.352536 containerd[1585]: time="2024-08-06T07:45:15.351950697Z" level=info msg="Forcibly stopping sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\""
Aug 6 07:45:15.352536 containerd[1585]: time="2024-08-06T07:45:15.352013128Z" level=info msg="TearDown network for sandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" successfully"
Aug 6 07:45:15.355329 containerd[1585]: time="2024-08-06T07:45:15.355260711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 6 07:45:15.355572 containerd[1585]: time="2024-08-06T07:45:15.355552109Z" level=info msg="RemovePodSandbox \"ac1f1853232dc534848fb653ba3238623f2f0f4f694cc13cb99287e98008b4ca\" returns successfully"
Aug 6 07:45:15.638754 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 6 07:45:15.898291 kubelet[2658]: E0806 07:45:15.895658 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:16.898015 kubelet[2658]: E0806 07:45:16.897898 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:17.360248 kubelet[2658]: E0806 07:45:17.359609 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:17.904881 kubelet[2658]: E0806 07:45:17.904524 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:19.384088 systemd-networkd[1229]: lxc_health: Link UP
Aug 6 07:45:19.386744 systemd-networkd[1229]: lxc_health: Gained carrier
Aug 6 07:45:20.636977 systemd-networkd[1229]: lxc_health: Gained IPv6LL
Aug 6 07:45:20.670009 kubelet[2658]: E0806 07:45:20.669966 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:20.692325 kubelet[2658]: I0806 07:45:20.692263 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-spv4r" podStartSLOduration=12.692186546 podCreationTimestamp="2024-08-06 07:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:45:15.916195108 +0000 UTC m=+120.780498814" watchObservedRunningTime="2024-08-06 07:45:20.692186546 +0000 UTC m=+125.556490251"
Aug 6 07:45:20.908954 kubelet[2658]: E0806 07:45:20.908832 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 6 07:45:22.935318 kubelet[2658]: E0806 07:45:22.935270 2658 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47622->127.0.0.1:40497: write tcp 127.0.0.1:47622->127.0.0.1:40497: write: broken pipe
Aug 6 07:45:25.047815 systemd[1]: run-containerd-runc-k8s.io-532f32902392902b7a42462563583141ae068900ae80f8df849a00cde553cef3-runc.HClUAg.mount: Deactivated successfully.
Aug 6 07:45:25.132195 sshd[4508]: pam_unix(sshd:session): session closed for user core
Aug 6 07:45:25.142285 systemd[1]: sshd@30-64.23.156.122:22-139.178.89.65:59268.service: Deactivated successfully.
Aug 6 07:45:25.148363 systemd-logind[1553]: Session 30 logged out. Waiting for processes to exit.
Aug 6 07:45:25.148483 systemd[1]: session-30.scope: Deactivated successfully.
Aug 6 07:45:25.152913 systemd-logind[1553]: Removed session 30.
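[Editor's note] The dns.go:153 warning that repeats throughout this section fires because the node's resolv.conf lists more nameserver entries than the kubelet will pass through to pods (three, matching the glibc limit), so the extras are dropped and the applied line ends up as "67.207.67.2 67.207.67.3 67.207.67.2". The sketch below shows a minimal check of that condition; the parsing is simplified and the three-server constant mirrors the limit rather than quoting kubelet code.

// dnscheck.go - illustrative sketch: flag a resolv.conf that carries more
// nameserver entries than the three that will actually be used.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // mirrors glibc's MAXNS and the kubelet's limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only the first %d will be used: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameserver line: %s\n", strings.Join(servers, " "))
	}
}

Trimming the node's resolv.conf (or the DHCP-supplied resolvers) to three unique entries would silence the warning; the duplicate 67.207.67.2 in the applied line suggests the original file also repeats a server.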