Jul 2 00:19:28.123664 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:19:28.123737 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:28.123762 kernel: BIOS-provided physical RAM map:
Jul 2 00:19:28.123774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:19:28.123875 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:19:28.123887 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:19:28.123900 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jul 2 00:19:28.123911 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jul 2 00:19:28.123922 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:19:28.123939 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:19:28.123950 kernel: NX (Execute Disable) protection: active
Jul 2 00:19:28.123961 kernel: APIC: Static calls initialized
Jul 2 00:19:28.123972 kernel: SMBIOS 2.8 present.
Jul 2 00:19:28.123984 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 2 00:19:28.123997 kernel: Hypervisor detected: KVM
Jul 2 00:19:28.124016 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:19:28.124029 kernel: kvm-clock: using sched offset of 4738399138 cycles
Jul 2 00:19:28.124044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:19:28.124058 kernel: tsc: Detected 2494.138 MHz processor
Jul 2 00:19:28.124074 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:19:28.124090 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:19:28.124106 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jul 2 00:19:28.124122 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:19:28.124138 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:19:28.124158 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:19:28.124171 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jul 2 00:19:28.124186 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124200 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124215 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124230 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 00:19:28.124245 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124259 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124275 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124296 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:19:28.124312 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 2 00:19:28.124327 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 2 00:19:28.124341 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 00:19:28.124355 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 2 00:19:28.124370 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 2 00:19:28.124385 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 2 00:19:28.124412 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 2 00:19:28.124428 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:19:28.124444 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:19:28.124460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 00:19:28.124477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 00:19:28.124493 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jul 2 00:19:28.124510 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jul 2 00:19:28.124530 kernel: Zone ranges:
Jul 2 00:19:28.124546 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:19:28.124563 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jul 2 00:19:28.124578 kernel: Normal empty
Jul 2 00:19:28.124617 kernel: Movable zone start for each node
Jul 2 00:19:28.124632 kernel: Early memory node ranges
Jul 2 00:19:28.124648 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:19:28.124663 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jul 2 00:19:28.124679 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jul 2 00:19:28.124700 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:19:28.124716 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:19:28.124731 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jul 2 00:19:28.124748 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:19:28.124764 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:19:28.124778 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:19:28.124792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:19:28.124808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:19:28.124825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:19:28.124846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:19:28.124863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:19:28.124879 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:19:28.124895 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:19:28.124912 kernel: TSC deadline timer available
Jul 2 00:19:28.124928 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:19:28.124945 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:19:28.124961 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 00:19:28.124978 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:19:28.124999 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:19:28.125016 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:19:28.125033 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:19:28.125047 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:19:28.125062 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:19:28.125078 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 00:19:28.125097 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:28.125113 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:19:28.125133 kernel: random: crng init done
Jul 2 00:19:28.125148 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:19:28.125165 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:19:28.125181 kernel: Fallback order for Node 0: 0
Jul 2 00:19:28.125197 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jul 2 00:19:28.125213 kernel: Policy zone: DMA32
Jul 2 00:19:28.125229 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:19:28.125245 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved)
Jul 2 00:19:28.125261 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:19:28.125282 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:19:28.125298 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:19:28.125314 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:19:28.125330 kernel: Dynamic Preempt: voluntary
Jul 2 00:19:28.125347 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:19:28.125366 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:19:28.125383 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:19:28.125400 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:19:28.125415 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:19:28.125435 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:19:28.125452 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:19:28.125468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:19:28.125485 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:19:28.125501 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:19:28.125516 kernel: Console: colour VGA+ 80x25
Jul 2 00:19:28.125532 kernel: printk: console [tty0] enabled
Jul 2 00:19:28.125548 kernel: printk: console [ttyS0] enabled
Jul 2 00:19:28.125563 kernel: ACPI: Core revision 20230628
Jul 2 00:19:28.125580 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:19:28.127347 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:19:28.127363 kernel: x2apic enabled
Jul 2 00:19:28.127376 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:19:28.127389 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:19:28.127403 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jul 2 00:19:28.127416 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jul 2 00:19:28.127429 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 00:19:28.127444 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 00:19:28.127478 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:19:28.127495 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:19:28.127511 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:19:28.127530 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:19:28.127545 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 00:19:28.127562 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:19:28.127577 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:19:28.127611 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 00:19:28.127628 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:19:28.127651 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:19:28.127670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:19:28.127687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:19:28.127705 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:19:28.127723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 00:19:28.127741 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:19:28.127758 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:19:28.127776 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:19:28.127817 kernel: SELinux: Initializing.
Jul 2 00:19:28.127834 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:19:28.127852 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:19:28.127869 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 2 00:19:28.127886 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:28.127904 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:28.127922 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:19:28.127940 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 2 00:19:28.127961 kernel: signal: max sigframe size: 1776
Jul 2 00:19:28.127979 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:19:28.127998 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:19:28.128016 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:19:28.128033 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:19:28.128051 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:19:28.128069 kernel: .... node #0, CPUs: #1
Jul 2 00:19:28.128086 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:19:28.128104 kernel: smpboot: Max logical packages: 1
Jul 2 00:19:28.128122 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jul 2 00:19:28.128144 kernel: devtmpfs: initialized
Jul 2 00:19:28.128162 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:19:28.128179 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:19:28.128197 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:19:28.128215 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:19:28.128233 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:19:28.128250 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:19:28.128268 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:19:28.128285 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:19:28.128305 kernel: audit: type=2000 audit(1719879566.774:1): state=initialized audit_enabled=0 res=1
Jul 2 00:19:28.128320 kernel: cpuidle: using governor menu
Jul 2 00:19:28.128335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:19:28.128352 kernel: dca service started, version 1.12.1
Jul 2 00:19:28.128369 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:19:28.128385 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:19:28.128403 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:19:28.128419 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:19:28.128436 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:19:28.128458 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:19:28.128474 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:19:28.128491 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:19:28.128507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:19:28.128524 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:19:28.128540 kernel: ACPI: Interpreter enabled
Jul 2 00:19:28.128556 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:19:28.128572 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:19:28.128606 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:19:28.128631 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:19:28.128647 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:19:28.128665 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:19:28.128995 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:19:28.129182 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:19:28.129357 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:19:28.129382 kernel: acpiphp: Slot [3] registered
Jul 2 00:19:28.129409 kernel: acpiphp: Slot [4] registered
Jul 2 00:19:28.129426 kernel: acpiphp: Slot [5] registered
Jul 2 00:19:28.129443 kernel: acpiphp: Slot [6] registered
Jul 2 00:19:28.129460 kernel: acpiphp: Slot [7] registered
Jul 2 00:19:28.129477 kernel: acpiphp: Slot [8] registered
Jul 2 00:19:28.129495 kernel: acpiphp: Slot [9] registered
Jul 2 00:19:28.129512 kernel: acpiphp: Slot [10] registered
Jul 2 00:19:28.129530 kernel: acpiphp: Slot [11] registered
Jul 2 00:19:28.129547 kernel: acpiphp: Slot [12] registered
Jul 2 00:19:28.129570 kernel: acpiphp: Slot [13] registered
Jul 2 00:19:28.130255 kernel: acpiphp: Slot [14] registered
Jul 2 00:19:28.130280 kernel: acpiphp: Slot [15] registered
Jul 2 00:19:28.130296 kernel: acpiphp: Slot [16] registered
Jul 2 00:19:28.130314 kernel: acpiphp: Slot [17] registered
Jul 2 00:19:28.130331 kernel: acpiphp: Slot [18] registered
Jul 2 00:19:28.130345 kernel: acpiphp: Slot [19] registered
Jul 2 00:19:28.130360 kernel: acpiphp: Slot [20] registered
Jul 2 00:19:28.130377 kernel: acpiphp: Slot [21] registered
Jul 2 00:19:28.130393 kernel: acpiphp: Slot [22] registered
Jul 2 00:19:28.130421 kernel: acpiphp: Slot [23] registered
Jul 2 00:19:28.130438 kernel: acpiphp: Slot [24] registered
Jul 2 00:19:28.130455 kernel: acpiphp: Slot [25] registered
Jul 2 00:19:28.130472 kernel: acpiphp: Slot [26] registered
Jul 2 00:19:28.130489 kernel: acpiphp: Slot [27] registered
Jul 2 00:19:28.130506 kernel: acpiphp: Slot [28] registered
Jul 2 00:19:28.130524 kernel: acpiphp: Slot [29] registered
Jul 2 00:19:28.130541 kernel: acpiphp: Slot [30] registered
Jul 2 00:19:28.130558 kernel: acpiphp: Slot [31] registered
Jul 2 00:19:28.130576 kernel: PCI host bridge to bus 0000:00
Jul 2 00:19:28.131306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:19:28.131494 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:19:28.131732 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:19:28.133891 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:19:28.134076 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:19:28.134238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:19:28.134450 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:19:28.134743 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:19:28.134905 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:19:28.135095 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 2 00:19:28.135276 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:19:28.135455 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:19:28.139769 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:19:28.140032 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:19:28.140163 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 2 00:19:28.140295 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 2 00:19:28.140423 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:19:28.140603 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:19:28.140785 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:19:28.141011 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 00:19:28.141182 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 00:19:28.141342 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 00:19:28.141486 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 2 00:19:28.143818 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:19:28.143978 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:19:28.144109 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:19:28.144236 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 2 00:19:28.144389 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 2 00:19:28.144534 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 00:19:28.144723 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:19:28.144836 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 2 00:19:28.144991 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 2 00:19:28.145138 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 00:19:28.145354 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 2 00:19:28.145491 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 2 00:19:28.147845 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 2 00:19:28.148056 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 00:19:28.148272 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:19:28.148440 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:19:28.148564 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 2 00:19:28.150901 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 00:19:28.151077 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:19:28.151220 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 2 00:19:28.151357 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 2 00:19:28.151496 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 2 00:19:28.151683 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 00:19:28.151845 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 2 00:19:28.152031 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 2 00:19:28.152050 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:19:28.152065 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:19:28.152078 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:19:28.152091 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:19:28.152104 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:19:28.152118 kernel: iommu: Default domain type: Translated
Jul 2 00:19:28.152138 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:19:28.152152 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:19:28.152165 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:19:28.152178 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:19:28.152191 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jul 2 00:19:28.152341 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:19:28.152484 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:19:28.152641 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:19:28.152659 kernel: vgaarb: loaded
Jul 2 00:19:28.152679 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:19:28.152692 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:19:28.152705 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:19:28.152718 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:19:28.152732 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:19:28.152746 kernel: pnp: PnP ACPI init
Jul 2 00:19:28.152759 kernel: pnp: PnP ACPI: found 4 devices
Jul 2 00:19:28.152773 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:19:28.152786 kernel: NET: Registered PF_INET protocol family
Jul 2 00:19:28.152803 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:19:28.152817 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:19:28.152830 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:19:28.152844 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:19:28.152857 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:19:28.152871 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:19:28.152884 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:19:28.152897 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:19:28.152910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:19:28.152928 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:19:28.153065 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:19:28.153205 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:19:28.153327 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:19:28.153447 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:19:28.153567 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:19:28.158406 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:19:28.158674 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:19:28.158715 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:19:28.158885 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 37390 usecs
Jul 2 00:19:28.158908 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:19:28.158926 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:19:28.158942 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jul 2 00:19:28.158959 kernel: Initialise system trusted keyrings
Jul 2 00:19:28.158974 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:19:28.158989 kernel: Key type asymmetric registered
Jul 2 00:19:28.159011 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:19:28.159026 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:19:28.159041 kernel: io scheduler mq-deadline registered
Jul 2 00:19:28.159058 kernel: io scheduler kyber registered
Jul 2 00:19:28.159073 kernel: io scheduler bfq registered
Jul 2 00:19:28.159086 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:19:28.159096 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 00:19:28.159107 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:19:28.159116 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:19:28.159126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:19:28.159142 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:19:28.159157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:19:28.159172 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:19:28.159188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:19:28.159429 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 2 00:19:28.159463 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:19:28.161962 kernel: rtc_cmos 00:03: registered as rtc0
Jul 2 00:19:28.162225 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T00:19:27 UTC (1719879567)
Jul 2 00:19:28.162353 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 2 00:19:28.162376 kernel: intel_pstate: CPU model not supported
Jul 2 00:19:28.162395 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:19:28.162413 kernel: Segment Routing with IPv6
Jul 2 00:19:28.162430 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:19:28.162449 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:19:28.162467 kernel: Key type dns_resolver registered
Jul 2 00:19:28.162484 kernel: IPI shorthand broadcast: enabled
Jul 2 00:19:28.162511 kernel: sched_clock: Marking stable (1345092621, 115336412)->(1521245435, -60816402)
Jul 2 00:19:28.162528 kernel: registered taskstats version 1
Jul 2 00:19:28.162546 kernel: Loading compiled-in X.509 certificates
Jul 2 00:19:28.162564 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:19:28.162580 kernel: Key type .fscrypt registered
Jul 2 00:19:28.162615 kernel: Key type fscrypt-provisioning registered
Jul 2 00:19:28.162631 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:19:28.162646 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:19:28.162662 kernel: ima: No architecture policies found
Jul 2 00:19:28.162682 kernel: clk: Disabling unused clocks
Jul 2 00:19:28.162697 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:19:28.162712 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:19:28.162728 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:19:28.162773 kernel: Run /init as init process
Jul 2 00:19:28.162794 kernel: with arguments:
Jul 2 00:19:28.162811 kernel: /init
Jul 2 00:19:28.162827 kernel: with environment:
Jul 2 00:19:28.162843 kernel: HOME=/
Jul 2 00:19:28.162864 kernel: TERM=linux
Jul 2 00:19:28.162882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:19:28.162904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:19:28.162927 systemd[1]: Detected virtualization kvm.
Jul 2 00:19:28.162947 systemd[1]: Detected architecture x86-64.
Jul 2 00:19:28.162963 systemd[1]: Running in initrd.
Jul 2 00:19:28.162979 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:19:28.162994 systemd[1]: Hostname set to .
Jul 2 00:19:28.163019 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:19:28.163037 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:19:28.163055 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:28.163074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:28.163094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:19:28.163109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:19:28.163125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:19:28.163147 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:19:28.163165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:19:28.163181 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:19:28.163197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:28.163209 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:28.163226 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:19:28.163246 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:19:28.163262 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:19:28.163273 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:19:28.163290 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:19:28.163305 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:19:28.163321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:19:28.163337 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:19:28.163362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:28.163377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:28.163393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:28.163412 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:19:28.163428 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:19:28.163445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:19:28.163461 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:19:28.163479 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:19:28.163496 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:19:28.163507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:19:28.163518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:28.163575 systemd-journald[182]: Collecting audit messages is disabled.
Jul 2 00:19:28.165728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:19:28.165749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:28.165766 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:19:28.165785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:19:28.165805 systemd-journald[182]: Journal started
Jul 2 00:19:28.165848 systemd-journald[182]: Runtime Journal (/run/log/journal/71aaede33eba48d48e45b1cc72e7529a) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:19:28.136318 systemd-modules-load[183]: Inserted module 'overlay'
Jul 2 00:19:28.169346 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:19:28.186889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:19:28.236965 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:19:28.237014 kernel: Bridge firewalling registered
Jul 2 00:19:28.212234 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jul 2 00:19:28.245195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:28.253074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:28.255326 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:19:28.258146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:28.268194 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:28.275010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:19:28.282927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:19:28.319038 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:28.322539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:28.324076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:28.337023 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:19:28.345966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:28.360277 dracut-cmdline[218]: dracut-dracut-053
Jul 2 00:19:28.407652 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:19:28.449752 systemd-resolved[219]: Positive Trust Anchors:
Jul 2 00:19:28.449777 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:19:28.449843 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:19:28.454325 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 2 00:19:28.457048 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:28.457659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:28.608693 kernel: SCSI subsystem initialized
Jul 2 00:19:28.639992 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:19:28.663741 kernel: iscsi: registered transport (tcp)
Jul 2 00:19:28.704998 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:19:28.705134 kernel: QLogic iSCSI HBA Driver
Jul 2 00:19:28.792219 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:19:28.798909 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:19:28.855416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:19:28.855518 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:19:28.856845 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:19:28.932686 kernel: raid6: avx2x4 gen() 16233 MB/s
Jul 2 00:19:28.932773 kernel: raid6: avx2x2 gen() 16114 MB/s
Jul 2 00:19:28.948814 kernel: raid6: avx2x1 gen() 13754 MB/s
Jul 2 00:19:28.948910 kernel: raid6: using algorithm avx2x4 gen() 16233 MB/s
Jul 2 00:19:28.966782 kernel: raid6: .... xor() 6720 MB/s, rmw enabled
Jul 2 00:19:28.966872 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:19:29.002663 kernel: xor: automatically using best checksumming function avx
Jul 2 00:19:29.264091 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:19:29.284181 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:19:29.294972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:29.327573 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 2 00:19:29.339450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:29.350304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:19:29.413778 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jul 2 00:19:29.493818 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:19:29.508250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:19:29.606924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:29.617910 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:19:29.702271 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:19:29.706383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:19:29.707670 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:29.710209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:19:29.732378 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:19:29.758545 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:19:29.799640 kernel: scsi host0: Virtio SCSI HBA
Jul 2 00:19:29.833633 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 2 00:19:30.010433 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 2 00:19:30.010783 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:19:30.010848 kernel: GPT:9289727 != 125829119
Jul 2 00:19:30.010871 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:19:30.010892 kernel: GPT:9289727 != 125829119
Jul 2 00:19:30.010912 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:19:30.010932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:30.010955 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:19:30.010981 kernel: ACPI: bus type USB registered
Jul 2 00:19:30.011005 kernel: usbcore: registered new interface driver usbfs
Jul 2 00:19:30.011117 kernel: usbcore: registered new interface driver hub
Jul 2 00:19:30.011142 kernel: usbcore: registered new device driver usb
Jul 2 00:19:29.964964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:19:30.013643 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 2 00:19:30.035232 kernel: libata version 3.00 loaded.
Jul 2 00:19:30.035287 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jul 2 00:19:30.035504 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:19:30.035560 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:19:29.965281 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:29.985745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:29.986196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:29.994363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:29.995211 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:30.003279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:30.045049 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:19:30.070752 kernel: scsi host1: ata_piix
Jul 2 00:19:30.071135 kernel: scsi host2: ata_piix
Jul 2 00:19:30.071351 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 2 00:19:30.071376 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 2 00:19:30.158171 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 2 00:19:30.164131 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 2 00:19:30.164439 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 2 00:19:30.164859 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 2 00:19:30.165101 kernel: hub 1-0:1.0: USB hub found
Jul 2 00:19:30.165310 kernel: hub 1-0:1.0: 2 ports detected
Jul 2 00:19:30.187680 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Jul 2 00:19:30.209710 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Jul 2 00:19:30.218882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:19:30.225072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:30.239412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:19:30.247890 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:19:30.248648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:19:30.257004 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:19:30.261838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:19:30.286799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:19:30.297860 disk-uuid[539]: Primary Header is updated.
Jul 2 00:19:30.297860 disk-uuid[539]: Secondary Entries is updated.
Jul 2 00:19:30.297860 disk-uuid[539]: Secondary Header is updated.
Jul 2 00:19:30.308644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:30.330993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:30.339460 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:31.335720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:19:31.338079 disk-uuid[540]: The operation has completed successfully.
Jul 2 00:19:31.400912 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:19:31.402215 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:19:31.432150 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:19:31.437161 sh[560]: Success
Jul 2 00:19:31.458777 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:19:31.545126 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:19:31.563896 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:19:31.565570 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:19:31.588644 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:19:31.588753 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:31.588769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:19:31.588788 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:19:31.588806 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:19:31.600357 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:19:31.602128 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:19:31.616210 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:19:31.619865 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:19:31.637478 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:31.637566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:31.637608 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:31.646641 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:31.677329 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:19:31.678733 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:31.694301 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:19:31.702945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:19:31.855000 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:19:31.887265 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:19:31.921326 systemd-networkd[747]: lo: Link UP
Jul 2 00:19:31.922184 systemd-networkd[747]: lo: Gained carrier
Jul 2 00:19:31.925618 ignition[671]: Ignition 2.18.0
Jul 2 00:19:31.925637 ignition[671]: Stage: fetch-offline
Jul 2 00:19:31.927285 systemd-networkd[747]: Enumeration completed
Jul 2 00:19:31.925713 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:31.927426 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:19:31.925730 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:31.928138 systemd[1]: Reached target network.target - Network.
Jul 2 00:19:31.925934 ignition[671]: parsed url from cmdline: ""
Jul 2 00:19:31.930448 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:19:31.925940 ignition[671]: no config URL provided
Jul 2 00:19:31.930453 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 2 00:19:31.925949 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:19:31.930964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:19:31.925963 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:19:31.933962 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:19:31.925974 ignition[671]: failed to fetch config: resource requires networking
Jul 2 00:19:31.933967 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:19:31.926296 ignition[671]: Ignition finished successfully
Jul 2 00:19:31.934859 systemd-networkd[747]: eth0: Link UP
Jul 2 00:19:31.934866 systemd-networkd[747]: eth0: Gained carrier
Jul 2 00:19:31.934880 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 2 00:19:31.938943 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:19:31.939927 systemd-networkd[747]: eth1: Link UP
Jul 2 00:19:31.939932 systemd-networkd[747]: eth1: Gained carrier
Jul 2 00:19:31.939946 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:19:31.955744 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Jul 2 00:19:31.963707 systemd-networkd[747]: eth0: DHCPv4 address 64.23.228.240/20, gateway 64.23.224.1 acquired from 169.254.169.253
Jul 2 00:19:31.965623 ignition[756]: Ignition 2.18.0
Jul 2 00:19:31.965637 ignition[756]: Stage: fetch
Jul 2 00:19:31.965902 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:31.965922 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:31.966074 ignition[756]: parsed url from cmdline: ""
Jul 2 00:19:31.966079 ignition[756]: no config URL provided
Jul 2 00:19:31.966086 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:19:31.966097 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:19:31.966124 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 2 00:19:31.966398 ignition[756]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 2 00:19:32.166985 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2
Jul 2 00:19:32.184026 ignition[756]: GET result: OK
Jul 2 00:19:32.184250 ignition[756]: parsing config with SHA512: a6196e841ee09c80594846c0e4dbf842fac28553aca482644510cbe6f1bc0ec7ba0e1517aa722d879ad1c58da92615f9d587021e82c568c4a956e22455f0ac52
Jul 2 00:19:32.190964 unknown[756]: fetched base config from "system"
Jul 2 00:19:32.190976 unknown[756]: fetched base config from "system"
Jul 2 00:19:32.192711 ignition[756]: fetch: fetch complete
Jul 2 00:19:32.190984 unknown[756]: fetched user config from "digitalocean"
Jul 2 00:19:32.192729 ignition[756]: fetch: fetch passed
Jul 2 00:19:32.195524 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:19:32.192833 ignition[756]: Ignition finished successfully
Jul 2 00:19:32.201023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:19:32.237833 ignition[764]: Ignition 2.18.0
Jul 2 00:19:32.238919 ignition[764]: Stage: kargs
Jul 2 00:19:32.240068 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:32.240092 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:32.243645 ignition[764]: kargs: kargs passed
Jul 2 00:19:32.244494 ignition[764]: Ignition finished successfully
Jul 2 00:19:32.247670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:19:32.252362 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:19:32.280787 ignition[771]: Ignition 2.18.0
Jul 2 00:19:32.280805 ignition[771]: Stage: disks
Jul 2 00:19:32.281123 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:32.281142 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:32.286129 ignition[771]: disks: disks passed
Jul 2 00:19:32.291052 ignition[771]: Ignition finished successfully
Jul 2 00:19:32.292860 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:19:32.294809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:19:32.296386 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:19:32.298602 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:19:32.299481 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:19:32.300814 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:19:32.307017 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:19:32.334540 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:19:32.344402 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:19:32.354838 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:19:32.534662 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:19:32.537174 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:19:32.539272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:19:32.545916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:19:32.558913 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:19:32.564211 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 2 00:19:32.573945 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:19:32.575694 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:19:32.575834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:19:32.583836 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789)
Jul 2 00:19:32.584676 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:19:32.587622 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:32.591374 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:32.591472 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:32.595208 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:19:32.602955 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:32.616087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:19:32.743686 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:19:32.756379 coreos-metadata[792]: Jul 02 00:19:32.756 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:32.764096 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:19:32.767369 coreos-metadata[791]: Jul 02 00:19:32.767 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:32.771080 coreos-metadata[792]: Jul 02 00:19:32.769 INFO Fetch successful
Jul 2 00:19:32.778266 coreos-metadata[792]: Jul 02 00:19:32.778 INFO wrote hostname ci-3975.1.1-9-82cbb2c548 to /sysroot/etc/hostname
Jul 2 00:19:32.781199 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:19:32.784117 coreos-metadata[791]: Jul 02 00:19:32.781 INFO Fetch successful
Jul 2 00:19:32.782120 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:19:32.804288 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:19:32.798044 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 2 00:19:32.798332 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 2 00:19:33.150032 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:19:33.160234 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:19:33.169163 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:19:33.208796 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:33.211068 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:19:33.281188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:19:33.298487 ignition[910]: INFO : Ignition 2.18.0
Jul 2 00:19:33.300838 ignition[910]: INFO : Stage: mount
Jul 2 00:19:33.300838 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:33.300838 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:33.303939 ignition[910]: INFO : mount: mount passed
Jul 2 00:19:33.303939 ignition[910]: INFO : Ignition finished successfully
Jul 2 00:19:33.306200 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:19:33.316996 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:19:33.324920 systemd-networkd[747]: eth1: Gained IPv6LL
Jul 2 00:19:33.372400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:19:33.400697 systemd-networkd[747]: eth0: Gained IPv6LL
Jul 2 00:19:33.426991 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922)
Jul 2 00:19:33.433962 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:19:33.434095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:19:33.434115 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:19:33.445690 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:19:33.451663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:19:33.526937 ignition[939]: INFO : Ignition 2.18.0
Jul 2 00:19:33.530902 ignition[939]: INFO : Stage: files
Jul 2 00:19:33.530902 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:33.530902 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:33.534722 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:19:33.548477 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:19:33.548477 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:19:33.558651 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:19:33.562537 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:19:33.565143 unknown[939]: wrote ssh authorized keys file for user: core
Jul 2 00:19:33.567398 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:19:33.582238 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:19:33.582238 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:19:34.509380 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:19:34.655655 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:19:34.655655 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:19:34.660391 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:19:35.143049 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:19:35.378829 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:19:35.378829 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:19:35.386846 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 00:19:35.683972 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:19:36.454368 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:19:36.454368 ignition[939]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:19:36.460534 ignition[939]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:19:36.462412 ignition[939]: INFO : files: files passed
Jul 2 00:19:36.477507 ignition[939]: INFO : Ignition finished successfully
Jul 2 00:19:36.465867 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:19:36.480133 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:19:36.499547 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:19:36.521776 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:19:36.522023 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:19:36.560634 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:36.560634 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:36.568798 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:19:36.567766 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:19:36.570289 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:19:36.588666 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:19:36.733820 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:19:36.734394 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:19:36.745155 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:19:36.746267 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:19:36.747218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:19:36.750000 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:19:36.834562 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:19:36.860958 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:19:36.901422 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:19:36.901744 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:19:36.909309 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:36.915016 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:36.918290 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:19:36.919079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:19:36.919242 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:19:36.920401 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:19:36.921074 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:19:36.921627 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:19:36.922240 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:19:36.922856 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:19:36.923550 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:19:36.924861 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:19:36.935222 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:19:36.937971 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:19:36.939029 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:19:36.940822 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:19:36.941051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:19:36.945908 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:36.947565 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:36.949643 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:19:36.949794 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:36.951539 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:19:36.952057 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:19:36.965305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:19:36.965623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:19:36.966573 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:19:36.966787 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:19:36.973211 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:19:36.973542 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:19:36.998389 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:19:37.007233 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:19:37.007408 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:37.012920 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:19:37.013651 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:19:37.013806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:37.031377 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:19:37.031580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:19:37.122828 ignition[993]: INFO : Ignition 2.18.0
Jul 2 00:19:37.122828 ignition[993]: INFO : Stage: umount
Jul 2 00:19:37.122828 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:19:37.122828 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 2 00:19:37.122828 ignition[993]: INFO : umount: umount passed
Jul 2 00:19:37.122828 ignition[993]: INFO : Ignition finished successfully
Jul 2 00:19:37.135576 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:19:37.138155 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:19:37.170359 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:19:37.172231 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:19:37.172459 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:19:37.176526 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:19:37.176761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:19:37.187979 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:19:37.188154 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:19:37.190096 systemd[1]: Stopped target network.target - Network.
Jul 2 00:19:37.191303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:19:37.191455 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:19:37.192498 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:19:37.227260 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:19:37.234101 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:37.235266 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:19:37.239100 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:19:37.239873 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:19:37.239998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:19:37.240579 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:19:37.240674 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:19:37.241214 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:19:37.241307 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:19:37.241867 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:19:37.241926 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:19:37.242819 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:19:37.243735 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:37.244953 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:19:37.245171 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:19:37.247836 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:19:37.248072 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:19:37.277772 systemd-networkd[747]: eth0: DHCPv6 lease lost
Jul 2 00:19:37.278198 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:19:37.278405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:37.284144 systemd-networkd[747]: eth1: DHCPv6 lease lost
Jul 2 00:19:37.286018 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:19:37.286162 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:37.293955 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:19:37.294311 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:19:37.297788 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:19:37.297910 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:37.305010 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:19:37.307632 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:19:37.309452 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:19:37.313730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:19:37.313931 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:37.316686 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:19:37.317039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:37.321285 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:37.355075 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:19:37.355938 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:19:37.373911 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:19:37.374304 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:37.379413 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:19:37.379670 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:37.381134 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:19:37.381267 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:37.382107 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:19:37.382225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:19:37.383394 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:19:37.383540 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:19:37.387570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:19:37.388008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:19:37.399306 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:19:37.412393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:19:37.412618 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:37.414411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:37.414572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:37.445958 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:19:37.446229 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:19:37.451109 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:19:37.461540 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:19:37.503372 systemd[1]: Switching root.
Jul 2 00:19:37.556606 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:19:37.556769 systemd-journald[182]: Journal stopped
Jul 2 00:19:40.145003 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:19:40.145137 kernel: SELinux: policy capability open_perms=1
Jul 2 00:19:40.145160 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:19:40.145181 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:19:40.145206 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:19:40.145224 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:19:40.145247 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:19:40.145266 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:19:40.145286 kernel: audit: type=1403 audit(1719879578.137:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:19:40.145316 systemd[1]: Successfully loaded SELinux policy in 80.965ms.
Jul 2 00:19:40.145364 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.551ms.
Jul 2 00:19:40.145390 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:19:40.145414 systemd[1]: Detected virtualization kvm.
Jul 2 00:19:40.145442 systemd[1]: Detected architecture x86-64.
Jul 2 00:19:40.145479 systemd[1]: Detected first boot.
Jul 2 00:19:40.145503 systemd[1]: Hostname set to .
Jul 2 00:19:40.145525 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:19:40.145550 zram_generator::config[1034]: No configuration found.
Jul 2 00:19:40.145575 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:19:40.145600 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:19:40.145688 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:19:40.145709 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:19:40.145731 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:19:40.145758 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:19:40.145787 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:19:40.145809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:19:40.145830 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:19:40.145850 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:19:40.145872 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:19:40.145898 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:19:40.145919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:19:40.145941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:19:40.145969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:19:40.145991 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:19:40.146012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:19:40.146033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:19:40.146053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:19:40.146074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:19:40.146102 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:19:40.146124 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:19:40.146149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:19:40.146170 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:19:40.146192 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:19:40.146215 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:19:40.146240 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:19:40.146261 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:19:40.146283 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:19:40.146304 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:19:40.146325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:19:40.146346 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:19:40.146368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:19:40.146391 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:19:40.146412 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:19:40.146437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:19:40.146459 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:19:40.146482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:40.146503 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:19:40.146529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:19:40.146550 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:19:40.146574 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:19:40.151936 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:19:40.151982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:19:40.152031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:40.152054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:19:40.152076 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:19:40.152100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:40.152122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:19:40.152144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:40.152167 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:19:40.152189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:40.152217 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:19:40.152241 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:19:40.152262 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:19:40.152284 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:19:40.152306 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:19:40.152328 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:19:40.152351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:19:40.152373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:19:40.152393 kernel: loop: module loaded
Jul 2 00:19:40.152421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:19:40.152445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:19:40.152482 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:19:40.152504 systemd[1]: Stopped verity-setup.service.
Jul 2 00:19:40.152526 kernel: fuse: init (API version 7.39)
Jul 2 00:19:40.152551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:40.152574 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:19:40.152597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:19:40.152632 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:19:40.152660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:19:40.152680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:19:40.152701 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:19:40.153110 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:19:40.153155 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:19:40.153189 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:19:40.154118 systemd-journald[1102]: Collecting audit messages is disabled.
Jul 2 00:19:40.154205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:40.154234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:40.154257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:40.154279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:40.154301 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:19:40.154323 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:19:40.154344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:40.154363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:40.154386 systemd-journald[1102]: Journal started
Jul 2 00:19:40.154432 systemd-journald[1102]: Runtime Journal (/run/log/journal/71aaede33eba48d48e45b1cc72e7529a) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:19:40.157865 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:19:39.652833 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:19:39.713737 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:19:39.714505 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:19:40.159471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:19:40.161041 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:19:40.164239 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:19:40.264916 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:19:40.318428 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:19:40.328030 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:19:40.330946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:19:40.331025 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:19:40.334820 kernel: ACPI: bus type drm_connector registered
Jul 2 00:19:40.339520 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:19:40.356960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:19:40.366133 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:19:40.368093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:40.371240 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:19:40.375932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:19:40.377909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:40.384085 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:19:40.384911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:40.387929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:19:40.392943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:19:40.398320 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:19:40.400223 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:19:40.400487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:19:40.402090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:19:40.405871 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:19:40.414484 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:19:40.439970 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:19:40.462338 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:19:40.467214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:19:40.468761 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:19:40.483877 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:19:40.479059 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:19:40.545879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:19:40.551277 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:19:40.561931 systemd-journald[1102]: Time spent on flushing to /var/log/journal/71aaede33eba48d48e45b1cc72e7529a is 125.390ms for 998 entries.
Jul 2 00:19:40.561931 systemd-journald[1102]: System Journal (/var/log/journal/71aaede33eba48d48e45b1cc72e7529a) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:19:40.729244 systemd-journald[1102]: Received client request to flush runtime journal.
Jul 2 00:19:40.729363 kernel: loop1: detected capacity change from 0 to 211296
Jul 2 00:19:40.729406 kernel: loop2: detected capacity change from 0 to 8
Jul 2 00:19:40.729436 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:19:40.563821 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:19:40.667323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:19:40.680423 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:19:40.693280 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:19:40.732125 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:19:40.767732 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:19:40.780537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:19:40.798886 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:19:40.811645 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:19:40.850705 kernel: loop5: detected capacity change from 0 to 211296
Jul 2 00:19:40.887086 kernel: loop6: detected capacity change from 0 to 8
Jul 2 00:19:40.895793 kernel: loop7: detected capacity change from 0 to 80568
Jul 2 00:19:40.916568 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 2 00:19:40.917739 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 2 00:19:40.922240 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 2 00:19:40.923128 (sd-merge)[1176]: Merged extensions into '/usr'.
Jul 2 00:19:40.946109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:19:40.959713 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:19:40.959753 systemd[1]: Reloading...
Jul 2 00:19:41.164991 zram_generator::config[1203]: No configuration found.
Jul 2 00:19:41.514693 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:19:41.576165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:19:41.685122 systemd[1]: Reloading finished in 724 ms.
Jul 2 00:19:41.714957 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:19:41.716538 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:19:41.732067 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:19:41.747055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:19:41.770152 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:19:41.770181 systemd[1]: Reloading...
Jul 2 00:19:41.798681 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:19:41.799905 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:19:41.802287 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:19:41.804475 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jul 2 00:19:41.804576 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jul 2 00:19:41.811342 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:19:41.811361 systemd-tmpfiles[1246]: Skipping /boot
Jul 2 00:19:41.834797 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:19:41.834820 systemd-tmpfiles[1246]: Skipping /boot
Jul 2 00:19:41.915649 zram_generator::config[1268]: No configuration found.
Jul 2 00:19:42.172407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:19:42.253491 systemd[1]: Reloading finished in 482 ms.
Jul 2 00:19:42.279471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:19:42.288727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:19:42.312072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:19:42.318104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:19:42.322195 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:19:42.333042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:19:42.337135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:19:42.345885 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:19:42.356405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.356773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:42.368739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:42.375090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:42.384333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:42.386409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:42.386835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.394781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.395212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:42.395577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:42.395810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.401552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.402039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:42.409129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:19:42.410137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:42.410457 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.422417 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:19:42.448912 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:19:42.462952 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:19:42.465397 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:19:42.467197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:42.468699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:42.475297 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:19:42.502118 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:42.502407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:42.504051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:42.513746 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:19:42.521461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:42.522736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:42.531007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:42.537259 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Jul 2 00:19:42.546536 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:19:42.552327 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:19:42.553767 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:19:42.608512 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:19:42.619637 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:19:42.624127 augenrules[1356]: No rules
Jul 2 00:19:42.627141 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:19:42.633325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:19:42.646955 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:19:42.707693 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:19:42.719112 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:19:42.720987 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:19:42.795916 systemd-networkd[1368]: lo: Link UP
Jul 2 00:19:42.795933 systemd-networkd[1368]: lo: Gained carrier
Jul 2 00:19:42.798688 systemd-networkd[1368]: Enumeration completed
Jul 2 00:19:42.806171 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:19:42.821227 systemd-resolved[1320]: Positive Trust Anchors:
Jul 2 00:19:42.822315 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:19:42.825812 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:19:42.825886 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:19:42.847805 systemd-resolved[1320]: Using system hostname 'ci-3975.1.1-9-82cbb2c548'.
Jul 2 00:19:42.859865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:19:42.860926 systemd[1]: Reached target network.target - Network.
Jul 2 00:19:42.862616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:19:42.866636 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1381)
Jul 2 00:19:42.933071 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:19:42.957802 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 2 00:19:42.959319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:42.960676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:19:42.969944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:19:42.978949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:19:42.990354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:19:43.003535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:19:43.003672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:19:43.003704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:19:43.008303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:19:43.011360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:19:43.010836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:19:43.012062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:19:43.016654 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:19:43.028635 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:19:43.042958 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 2 00:19:43.051765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1383)
Jul 2 00:19:43.060191 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 2 00:19:43.062527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:19:43.062831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:19:43.065275 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:19:43.065948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:19:43.087448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:19:43.138837 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-42:dd:b4:9c:eb:32.network.
Jul 2 00:19:43.142754 systemd-networkd[1368]: eth0: Link UP
Jul 2 00:19:43.142788 systemd-networkd[1368]: eth0: Gained carrier
Jul 2 00:19:43.148882 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:43.160842 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:19:43.162748 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-02:46:5f:a5:60:e0.network.
Jul 2 00:19:43.163742 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:43.164574 systemd-networkd[1368]: eth1: Link UP
Jul 2 00:19:43.164616 systemd-networkd[1368]: eth1: Gained carrier
Jul 2 00:19:43.167394 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:43.169053 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:43.189547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:19:43.202971 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:19:43.244561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:19:43.280270 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:19:43.286218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:43.338638 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 2 00:19:43.338802 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 2 00:19:43.354683 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:19:43.357702 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 2 00:19:43.357898 kernel: [drm] features: -context_init
Jul 2 00:19:43.361756 kernel: [drm] number of scanouts: 1
Jul 2 00:19:43.361857 kernel: [drm] number of cap sets: 0
Jul 2 00:19:43.364618 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 2 00:19:43.370841 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 2 00:19:43.370936 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:19:43.383798 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 2 00:19:43.388101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:43.388448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:43.400015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:43.511268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:19:43.512013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:43.532983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:19:43.606671 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:19:43.633687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:19:43.640647 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:19:43.658060 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:19:43.686698 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:19:43.730130 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:19:43.735097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:19:43.736902 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:19:43.737316 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:19:43.737619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:19:43.738135 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:19:43.738450 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:19:43.738556 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:19:43.739194 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:19:43.739329 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:19:43.741921 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:19:43.747412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:19:43.751441 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:19:43.762609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:19:43.776183 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:19:43.779228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:19:43.782990 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:19:43.786325 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:19:43.790438 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:19:43.788682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:19:43.788715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:19:43.798992 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:19:43.820934 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:19:43.830962 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:19:43.835801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:19:43.841041 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:19:43.844580 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:19:43.855061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:19:43.866823 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:19:43.876909 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:19:43.888020 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:19:43.901995 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:19:43.907040 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:19:43.908152 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:19:43.917106 dbus-daemon[1435]: [system] SELinux support is enabled
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found loop4
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found loop5
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found loop6
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found loop7
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda1
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda2
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda3
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found usr
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda4
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda6
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda7
Jul 2 00:19:43.934979 extend-filesystems[1439]: Found vda9
Jul 2 00:19:43.934979 extend-filesystems[1439]: Checking size of /dev/vda9
Jul 2 00:19:43.930855 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:19:44.038839 jq[1436]: false
Jul 2 00:19:44.039191 coreos-metadata[1434]: Jul 02 00:19:43.939 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:44.039191 coreos-metadata[1434]: Jul 02 00:19:44.009 INFO Fetch successful
Jul 2 00:19:43.956805 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:19:43.960132 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:19:43.983033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:19:44.010285 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:19:44.057529 jq[1449]: true
Jul 2 00:19:44.010699 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:19:44.025895 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:19:44.026119 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:19:44.047791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:19:44.048007 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:19:44.050015 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:19:44.050176 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 2 00:19:44.050228 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:19:44.080667 extend-filesystems[1439]: Resized partition /dev/vda9
Jul 2 00:19:44.102229 update_engine[1446]: I0702 00:19:44.099416 1446 main.cc:92] Flatcar Update Engine starting
Jul 2 00:19:44.107836 extend-filesystems[1473]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:19:44.117908 update_engine[1446]: I0702 00:19:44.112819 1446 update_check_scheduler.cc:74] Next update check in 2m35s
Jul 2 00:19:44.111448 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:19:44.112927 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:19:44.126672 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 2 00:19:44.124679 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:19:44.132484 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:19:44.138353 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:19:44.153508 tar[1457]: linux-amd64/helm
Jul 2 00:19:44.166642 jq[1460]: true
Jul 2 00:19:44.247779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1367)
Jul 2 00:19:44.256873 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:19:44.286893 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:19:44.457782 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 00:19:44.441804 systemd-logind[1445]: New seat seat0.
Jul 2 00:19:44.514997 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:19:44.514997 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 00:19:44.514997 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 00:19:44.558287 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Jul 2 00:19:44.558287 extend-filesystems[1439]: Found vdb
Jul 2 00:19:44.562661 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:19:44.515671 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:19:44.515711 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:19:44.517636 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:19:44.517945 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:19:44.555136 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:19:44.573786 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:19:44.595675 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:19:44.603453 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:19:44.610532 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:19:44.614428 systemd[1]: Starting sshkeys.service...
Jul 2 00:19:44.654714 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:19:44.666250 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:19:44.677204 systemd[1]: Started sshd@0-64.23.228.240:22-43.156.152.211:37694.service - OpenSSH per-connection server daemon (43.156.152.211:37694).
Jul 2 00:19:44.695142 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:19:44.705367 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:19:44.731172 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:19:44.731454 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:19:44.745293 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:19:44.802373 coreos-metadata[1521]: Jul 02 00:19:44.800 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 2 00:19:44.814487 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:19:44.818021 coreos-metadata[1521]: Jul 02 00:19:44.813 INFO Fetch successful
Jul 2 00:19:44.830640 unknown[1521]: wrote ssh authorized keys file for user: core
Jul 2 00:19:44.840958 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:19:44.847958 systemd-networkd[1368]: eth1: Gained IPv6LL
Jul 2 00:19:44.848474 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:44.857243 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:19:44.858475 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:19:44.865978 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:19:44.879697 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:19:44.894208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:19:44.906324 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:19:44.978653 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:19:44.981817 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:19:44.986277 systemd[1]: Finished sshkeys.service.
Jul 2 00:19:45.010805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:19:45.064469 containerd[1470]: time="2024-07-02T00:19:45.064286817Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:19:45.101103 systemd-networkd[1368]: eth0: Gained IPv6LL
Jul 2 00:19:45.103548 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jul 2 00:19:45.152857 containerd[1470]: time="2024-07-02T00:19:45.151491312Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:19:45.154622 containerd[1470]: time="2024-07-02T00:19:45.154292424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.166447 containerd[1470]: time="2024-07-02T00:19:45.166077311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.166693651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167082750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167115239Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167271429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167346869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167367868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.167465067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.168039142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.168073428Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.168090401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:19:45.168617 containerd[1470]: time="2024-07-02T00:19:45.168300119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:19:45.169334 containerd[1470]: time="2024-07-02T00:19:45.168324868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:19:45.169334 containerd[1470]: time="2024-07-02T00:19:45.168416959Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:19:45.169334 containerd[1470]: time="2024-07-02T00:19:45.168438544Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:19:45.199489 containerd[1470]: time="2024-07-02T00:19:45.199417196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200012186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200059889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200115250Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200138023Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200155910Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200177129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200416417Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200444678Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200471436Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200510128Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200554916Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200611165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200630314Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.201621 containerd[1470]: time="2024-07-02T00:19:45.200644099Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.200661687Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.200677804Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.200692850Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.200713126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.200926572Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201317034Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201368008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201393693Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201428495Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201530249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201553558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.202207 containerd[1470]: time="2024-07-02T00:19:45.201568627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.204467 containerd[1470]: time="2024-07-02T00:19:45.204287835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.206689422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.206744295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.206768101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.206788868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.206811516Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.207108772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.207147211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.208994024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.209053592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.209078084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1 Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.209108593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.209189903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:19:45.210633 containerd[1470]: time="2024-07-02T00:19:45.209210108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:19:45.211303 containerd[1470]: time="2024-07-02T00:19:45.209643900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: 
Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:19:45.211303 containerd[1470]: time="2024-07-02T00:19:45.209739093Z" level=info msg="Connect containerd service" Jul 2 00:19:45.211303 containerd[1470]: time="2024-07-02T00:19:45.209791677Z" level=info msg="using legacy CRI server" Jul 2 00:19:45.211303 containerd[1470]: time="2024-07-02T00:19:45.209800361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:19:45.211303 containerd[1470]: time="2024-07-02T00:19:45.209901243Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:19:45.212548 containerd[1470]: time="2024-07-02T00:19:45.212485748Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214263650Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214335143Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214355216Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214375360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214382008Z" level=info msg="Start subscribing containerd event" Jul 2 00:19:45.214709 containerd[1470]: time="2024-07-02T00:19:45.214540179Z" level=info msg="Start recovering state" Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.215291814Z" level=info msg="Start event monitor" Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.215348126Z" level=info msg="Start snapshots syncer" Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.215368250Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.215380819Z" level=info msg="Start streaming server" Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.216803405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.216909646Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 00:19:45.217633 containerd[1470]: time="2024-07-02T00:19:45.217012256Z" level=info msg="containerd successfully booted in 0.161756s" Jul 2 00:19:45.217235 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:19:45.500122 tar[1457]: linux-amd64/LICENSE Jul 2 00:19:45.500122 tar[1457]: linux-amd64/README.md Jul 2 00:19:45.534220 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:19:46.225977 sshd[1520]: Received disconnect from 43.156.152.211 port 37694:11: Bye Bye [preauth] Jul 2 00:19:46.225977 sshd[1520]: Disconnected from authenticating user root 43.156.152.211 port 37694 [preauth] Jul 2 00:19:46.228354 systemd[1]: sshd@0-64.23.228.240:22-43.156.152.211:37694.service: Deactivated successfully. Jul 2 00:19:46.474397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:46.477186 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:19:46.483126 systemd[1]: Startup finished in 1.533s (kernel) + 10.327s (initrd) + 8.423s (userspace) = 20.284s. Jul 2 00:19:46.487490 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:47.599841 kubelet[1562]: E0702 00:19:47.599622 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:47.605343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:47.605895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:19:47.606254 systemd[1]: kubelet.service: Consumed 1.640s CPU time. 
Jul 2 00:19:47.618778 systemd[1]: Started sshd@1-64.23.228.240:22-43.134.124.145:52796.service - OpenSSH per-connection server daemon (43.134.124.145:52796). Jul 2 00:19:49.116627 sshd[1575]: Received disconnect from 43.134.124.145 port 52796:11: Bye Bye [preauth] Jul 2 00:19:49.116627 sshd[1575]: Disconnected from authenticating user root 43.134.124.145 port 52796 [preauth] Jul 2 00:19:49.114861 systemd[1]: sshd@1-64.23.228.240:22-43.134.124.145:52796.service: Deactivated successfully. Jul 2 00:19:50.324299 systemd[1]: Started sshd@2-64.23.228.240:22-43.156.68.109:50324.service - OpenSSH per-connection server daemon (43.156.68.109:50324). Jul 2 00:19:51.828659 sshd[1580]: Received disconnect from 43.156.68.109 port 50324:11: Bye Bye [preauth] Jul 2 00:19:51.828659 sshd[1580]: Disconnected from authenticating user root 43.156.68.109 port 50324 [preauth] Jul 2 00:19:51.831176 systemd[1]: sshd@2-64.23.228.240:22-43.156.68.109:50324.service: Deactivated successfully. Jul 2 00:19:53.779186 systemd[1]: Started sshd@3-64.23.228.240:22-147.75.109.163:40012.service - OpenSSH per-connection server daemon (147.75.109.163:40012). Jul 2 00:19:53.842556 sshd[1585]: Accepted publickey for core from 147.75.109.163 port 40012 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:53.846499 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:53.861554 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:19:53.869144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:19:53.873844 systemd-logind[1445]: New session 1 of user core. Jul 2 00:19:53.902566 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:19:53.911279 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 2 00:19:53.929449 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:54.109805 systemd[1589]: Queued start job for default target default.target. Jul 2 00:19:54.121134 systemd[1589]: Created slice app.slice - User Application Slice. Jul 2 00:19:54.121581 systemd[1589]: Reached target paths.target - Paths. Jul 2 00:19:54.121792 systemd[1589]: Reached target timers.target - Timers. Jul 2 00:19:54.124622 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:19:54.150521 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:19:54.151614 systemd[1589]: Reached target sockets.target - Sockets. Jul 2 00:19:54.151859 systemd[1589]: Reached target basic.target - Basic System. Jul 2 00:19:54.152021 systemd[1589]: Reached target default.target - Main User Target. Jul 2 00:19:54.152146 systemd[1589]: Startup finished in 208ms. Jul 2 00:19:54.152348 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:19:54.160982 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:19:54.241780 systemd[1]: Started sshd@4-64.23.228.240:22-147.75.109.163:40016.service - OpenSSH per-connection server daemon (147.75.109.163:40016). Jul 2 00:19:54.311724 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 40016 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:54.314035 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:54.325302 systemd-logind[1445]: New session 2 of user core. Jul 2 00:19:54.344085 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:19:54.426989 sshd[1600]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:54.445937 systemd[1]: sshd@4-64.23.228.240:22-147.75.109.163:40016.service: Deactivated successfully. Jul 2 00:19:54.449407 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 2 00:19:54.453983 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:19:54.460545 systemd[1]: Started sshd@5-64.23.228.240:22-147.75.109.163:40026.service - OpenSSH per-connection server daemon (147.75.109.163:40026). Jul 2 00:19:54.463129 systemd-logind[1445]: Removed session 2. Jul 2 00:19:54.534660 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 40026 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:54.536687 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:54.548339 systemd-logind[1445]: New session 3 of user core. Jul 2 00:19:54.554144 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:19:54.620117 sshd[1607]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:54.633093 systemd[1]: sshd@5-64.23.228.240:22-147.75.109.163:40026.service: Deactivated successfully. Jul 2 00:19:54.635954 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:19:54.638888 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:19:54.648618 systemd[1]: Started sshd@6-64.23.228.240:22-147.75.109.163:40032.service - OpenSSH per-connection server daemon (147.75.109.163:40032). Jul 2 00:19:54.651749 systemd-logind[1445]: Removed session 3. Jul 2 00:19:54.713515 sshd[1614]: Accepted publickey for core from 147.75.109.163 port 40032 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:54.714208 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:54.722666 systemd-logind[1445]: New session 4 of user core. Jul 2 00:19:54.730992 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:19:54.802709 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:54.817465 systemd[1]: sshd@6-64.23.228.240:22-147.75.109.163:40032.service: Deactivated successfully. 
Jul 2 00:19:54.820859 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:19:54.824582 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:19:54.839815 systemd[1]: Started sshd@7-64.23.228.240:22-147.75.109.163:40038.service - OpenSSH per-connection server daemon (147.75.109.163:40038). Jul 2 00:19:54.841783 systemd-logind[1445]: Removed session 4. Jul 2 00:19:54.886646 sshd[1621]: Accepted publickey for core from 147.75.109.163 port 40038 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:54.888314 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:54.894855 systemd-logind[1445]: New session 5 of user core. Jul 2 00:19:54.899979 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:19:54.980669 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:19:54.981753 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:19:55.000929 sudo[1624]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:55.006279 sshd[1621]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:55.017877 systemd[1]: sshd@7-64.23.228.240:22-147.75.109.163:40038.service: Deactivated successfully. Jul 2 00:19:55.021535 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:19:55.025417 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:19:55.031119 systemd[1]: Started sshd@8-64.23.228.240:22-147.75.109.163:40052.service - OpenSSH per-connection server daemon (147.75.109.163:40052). Jul 2 00:19:55.035352 systemd-logind[1445]: Removed session 5. 
Jul 2 00:19:55.100648 sshd[1629]: Accepted publickey for core from 147.75.109.163 port 40052 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:55.102731 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:55.114998 systemd-logind[1445]: New session 6 of user core. Jul 2 00:19:55.124974 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:19:55.193306 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:19:55.193829 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:19:55.199922 sudo[1633]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:55.209112 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:19:55.210044 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:19:55.244041 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:19:55.250966 auditctl[1636]: No rules Jul 2 00:19:55.253038 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:19:55.253414 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:19:55.258179 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:19:55.329315 augenrules[1654]: No rules Jul 2 00:19:55.332437 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:19:55.335393 sudo[1632]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:55.344999 sshd[1629]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:55.377874 systemd[1]: sshd@8-64.23.228.240:22-147.75.109.163:40052.service: Deactivated successfully. Jul 2 00:19:55.386530 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:19:55.388847 systemd-logind[1445]: Session 6 logged out. 
Waiting for processes to exit. Jul 2 00:19:55.396352 systemd[1]: Started sshd@9-64.23.228.240:22-147.75.109.163:40054.service - OpenSSH per-connection server daemon (147.75.109.163:40054). Jul 2 00:19:55.400891 systemd-logind[1445]: Removed session 6. Jul 2 00:19:55.489036 sshd[1662]: Accepted publickey for core from 147.75.109.163 port 40054 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:19:55.493320 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:19:55.517967 systemd-logind[1445]: New session 7 of user core. Jul 2 00:19:55.526057 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:19:55.613565 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:19:55.614074 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:19:55.947084 (dockerd)[1674]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:19:55.948368 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:19:56.036701 systemd[1]: Started sshd@10-64.23.228.240:22-43.153.223.232:53190.service - OpenSSH per-connection server daemon (43.153.223.232:53190). Jul 2 00:19:56.952274 dockerd[1674]: time="2024-07-02T00:19:56.952175140Z" level=info msg="Starting up" Jul 2 00:19:57.061861 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1993840310-merged.mount: Deactivated successfully. Jul 2 00:19:57.167323 dockerd[1674]: time="2024-07-02T00:19:57.159887538Z" level=info msg="Loading containers: start." 
Jul 2 00:19:57.296546 sshd[1676]: Invalid user admin from 43.153.223.232 port 53190 Jul 2 00:19:57.546205 sshd[1676]: Received disconnect from 43.153.223.232 port 53190:11: Bye Bye [preauth] Jul 2 00:19:57.546205 sshd[1676]: Disconnected from invalid user admin 43.153.223.232 port 53190 [preauth] Jul 2 00:19:57.554897 kernel: Initializing XFRM netlink socket Jul 2 00:19:57.555157 systemd[1]: sshd@10-64.23.228.240:22-43.153.223.232:53190.service: Deactivated successfully. Jul 2 00:19:57.629998 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection. Jul 2 00:19:57.633881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:19:57.655992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:58.281712 systemd-timesyncd[1334]: Contacted time server 104.131.155.175:123 (2.flatcar.pool.ntp.org). Jul 2 00:19:58.281797 systemd-timesyncd[1334]: Initial clock synchronization to Tue 2024-07-02 00:19:58.281339 UTC. Jul 2 00:19:58.283340 systemd-resolved[1320]: Clock change detected. Flushing caches. Jul 2 00:19:58.492863 systemd-networkd[1368]: docker0: Link UP Jul 2 00:19:58.627791 dockerd[1674]: time="2024-07-02T00:19:58.626801992Z" level=info msg="Loading containers: done." Jul 2 00:19:58.673928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:19:58.754646 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:58.956476 dockerd[1674]: time="2024-07-02T00:19:58.955821578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:19:58.958289 dockerd[1674]: time="2024-07-02T00:19:58.957097499Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:19:58.958289 dockerd[1674]: time="2024-07-02T00:19:58.958213262Z" level=info msg="Daemon has completed initialization" Jul 2 00:19:59.057453 kubelet[1778]: E0702 00:19:59.057310 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:59.075835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:59.076107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:19:59.124617 dockerd[1674]: time="2024-07-02T00:19:59.115765548Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:19:59.123210 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:20:01.098231 containerd[1470]: time="2024-07-02T00:20:01.091750126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:20:02.322542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038402698.mount: Deactivated successfully. 
Jul 2 00:20:08.262831 containerd[1470]: time="2024-07-02T00:20:08.262228515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:08.267369 containerd[1470]: time="2024-07-02T00:20:08.266716170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jul 2 00:20:08.270626 containerd[1470]: time="2024-07-02T00:20:08.269328931Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:08.283743 containerd[1470]: time="2024-07-02T00:20:08.283658951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:08.286786 containerd[1470]: time="2024-07-02T00:20:08.286375119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 7.19455931s" Jul 2 00:20:08.287003 containerd[1470]: time="2024-07-02T00:20:08.286811357Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 00:20:08.397711 containerd[1470]: time="2024-07-02T00:20:08.397256106Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:20:09.141380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 2 00:20:09.161143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:09.432801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:20:09.452480 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:20:09.790627 kubelet[1902]: E0702 00:20:09.790396 1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:20:09.798185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:20:09.798489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:20:10.048075 systemd[1]: Started sshd@11-64.23.228.240:22-190.181.4.12:55070.service - OpenSSH per-connection server daemon (190.181.4.12:55070). Jul 2 00:20:11.014764 sshd[1915]: Invalid user user from 190.181.4.12 port 55070 Jul 2 00:20:11.198233 sshd[1915]: Received disconnect from 190.181.4.12 port 55070:11: Bye Bye [preauth] Jul 2 00:20:11.198233 sshd[1915]: Disconnected from invalid user user 190.181.4.12 port 55070 [preauth] Jul 2 00:20:11.201137 systemd[1]: sshd@11-64.23.228.240:22-190.181.4.12:55070.service: Deactivated successfully. 
Jul 2 00:20:12.105431 containerd[1470]: time="2024-07-02T00:20:12.105329565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.107773 containerd[1470]: time="2024-07-02T00:20:12.107646387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747"
Jul 2 00:20:12.110315 containerd[1470]: time="2024-07-02T00:20:12.110218949Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.117573 containerd[1470]: time="2024-07-02T00:20:12.117502208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:12.118989 containerd[1470]: time="2024-07-02T00:20:12.118766270Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 3.721450734s"
Jul 2 00:20:12.118989 containerd[1470]: time="2024-07-02T00:20:12.118830592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 00:20:12.171889 containerd[1470]: time="2024-07-02T00:20:12.171776923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 00:20:12.872983 systemd[1]: Started sshd@12-64.23.228.240:22-43.134.0.65:34250.service - OpenSSH per-connection server daemon (43.134.0.65:34250).
Jul 2 00:20:13.726014 containerd[1470]: time="2024-07-02T00:20:13.725951515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:13.728470 containerd[1470]: time="2024-07-02T00:20:13.727862405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803"
Jul 2 00:20:13.730451 containerd[1470]: time="2024-07-02T00:20:13.730337928Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:13.736720 containerd[1470]: time="2024-07-02T00:20:13.736476757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:13.739336 containerd[1470]: time="2024-07-02T00:20:13.738798840Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.566945672s"
Jul 2 00:20:13.739336 containerd[1470]: time="2024-07-02T00:20:13.738865372Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 00:20:13.781136 containerd[1470]: time="2024-07-02T00:20:13.781058790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 00:20:14.380623 sshd[1924]: Received disconnect from 43.134.0.65 port 34250:11: Bye Bye [preauth]
Jul 2 00:20:14.380623 sshd[1924]: Disconnected from authenticating user root 43.134.0.65 port 34250 [preauth]
Jul 2 00:20:14.384243 systemd[1]: sshd@12-64.23.228.240:22-43.134.0.65:34250.service: Deactivated successfully.
Jul 2 00:20:15.589410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141269470.mount: Deactivated successfully.
Jul 2 00:20:16.364859 containerd[1470]: time="2024-07-02T00:20:16.364752967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:16.367560 containerd[1470]: time="2024-07-02T00:20:16.367387818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334"
Jul 2 00:20:16.370098 containerd[1470]: time="2024-07-02T00:20:16.369231337Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:16.373456 containerd[1470]: time="2024-07-02T00:20:16.373352589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:16.374445 containerd[1470]: time="2024-07-02T00:20:16.374354879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.59323187s"
Jul 2 00:20:16.374445 containerd[1470]: time="2024-07-02T00:20:16.374436692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 00:20:16.411308 containerd[1470]: time="2024-07-02T00:20:16.411202977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:20:17.306068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990314165.mount: Deactivated successfully.
Jul 2 00:20:17.393510 systemd[1]: Started sshd@13-64.23.228.240:22-43.163.214.38:36988.service - OpenSSH per-connection server daemon (43.163.214.38:36988).
Jul 2 00:20:18.448473 sshd[1958]: Received disconnect from 43.163.214.38 port 36988:11: Bye Bye [preauth]
Jul 2 00:20:18.448473 sshd[1958]: Disconnected from authenticating user root 43.163.214.38 port 36988 [preauth]
Jul 2 00:20:18.450012 systemd[1]: sshd@13-64.23.228.240:22-43.163.214.38:36988.service: Deactivated successfully.
Jul 2 00:20:18.858889 containerd[1470]: time="2024-07-02T00:20:18.858069176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:18.860651 containerd[1470]: time="2024-07-02T00:20:18.860500978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jul 2 00:20:18.863044 containerd[1470]: time="2024-07-02T00:20:18.862971933Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:18.880077 containerd[1470]: time="2024-07-02T00:20:18.879817617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:18.883071 containerd[1470]: time="2024-07-02T00:20:18.883004648Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.471706582s"
Jul 2 00:20:18.883071 containerd[1470]: time="2024-07-02T00:20:18.883064606Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:20:18.943007 containerd[1470]: time="2024-07-02T00:20:18.942943168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:20:19.708053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001778273.mount: Deactivated successfully.
Jul 2 00:20:19.743461 containerd[1470]: time="2024-07-02T00:20:19.742175233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:19.747724 containerd[1470]: time="2024-07-02T00:20:19.747633937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:20:19.749911 containerd[1470]: time="2024-07-02T00:20:19.749838660Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:19.755771 containerd[1470]: time="2024-07-02T00:20:19.755699995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:19.758387 containerd[1470]: time="2024-07-02T00:20:19.757261389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 813.998849ms"
Jul 2 00:20:19.758637 containerd[1470]: time="2024-07-02T00:20:19.758608395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:20:19.897373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:20:19.909326 containerd[1470]: time="2024-07-02T00:20:19.909273390Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:20:19.939665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:20:20.424360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:20:20.443166 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:20:20.661216 kubelet[2017]: E0702 00:20:20.655893 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:20:20.661794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:20:20.662053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:20:21.155292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076172542.mount: Deactivated successfully.
Jul 2 00:20:24.154677 systemd[1]: Started sshd@14-64.23.228.240:22-107.175.206.68:52014.service - OpenSSH per-connection server daemon (107.175.206.68:52014).
Jul 2 00:20:24.813479 sshd[2071]: Received disconnect from 107.175.206.68 port 52014:11: Bye Bye [preauth]
Jul 2 00:20:24.813479 sshd[2071]: Disconnected from authenticating user root 107.175.206.68 port 52014 [preauth]
Jul 2 00:20:24.812094 systemd[1]: sshd@14-64.23.228.240:22-107.175.206.68:52014.service: Deactivated successfully.
Jul 2 00:20:25.010146 systemd[1]: Started sshd@15-64.23.228.240:22-112.6.122.181:50288.service - OpenSSH per-connection server daemon (112.6.122.181:50288).
Jul 2 00:20:26.070218 sshd[2076]: Invalid user ubuntu from 112.6.122.181 port 50288
Jul 2 00:20:26.283540 sshd[2076]: Received disconnect from 112.6.122.181 port 50288:11: Bye Bye [preauth]
Jul 2 00:20:26.283540 sshd[2076]: Disconnected from invalid user ubuntu 112.6.122.181 port 50288 [preauth]
Jul 2 00:20:26.284017 systemd[1]: sshd@15-64.23.228.240:22-112.6.122.181:50288.service: Deactivated successfully.
Jul 2 00:20:26.438762 containerd[1470]: time="2024-07-02T00:20:26.438586411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:26.440903 containerd[1470]: time="2024-07-02T00:20:26.440736497Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 00:20:26.444815 containerd[1470]: time="2024-07-02T00:20:26.444457242Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:26.449970 containerd[1470]: time="2024-07-02T00:20:26.449877483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:26.455021 containerd[1470]: time="2024-07-02T00:20:26.454643495Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.538511764s"
Jul 2 00:20:26.455021 containerd[1470]: time="2024-07-02T00:20:26.454734665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:20:26.695951 systemd[1]: Started sshd@16-64.23.228.240:22-103.82.240.189:60864.service - OpenSSH per-connection server daemon (103.82.240.189:60864).
Jul 2 00:20:27.684058 sshd[2091]: Invalid user ftpadmin from 103.82.240.189 port 60864
Jul 2 00:20:27.864540 sshd[2091]: Received disconnect from 103.82.240.189 port 60864:11: Bye Bye [preauth]
Jul 2 00:20:27.864540 sshd[2091]: Disconnected from invalid user ftpadmin 103.82.240.189 port 60864 [preauth]
Jul 2 00:20:27.868914 systemd[1]: sshd@16-64.23.228.240:22-103.82.240.189:60864.service: Deactivated successfully.
Jul 2 00:20:29.846182 update_engine[1446]: I0702 00:20:29.846091 1446 update_attempter.cc:509] Updating boot flags...
Jul 2 00:20:29.929727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2153)
Jul 2 00:20:30.025489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2156)
Jul 2 00:20:30.890860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:20:30.900989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:20:31.021574 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:20:31.022016 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:20:31.022559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:20:31.035074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:20:31.084684 systemd[1]: Reloading requested from client PID 2170 ('systemctl') (unit session-7.scope)...
Jul 2 00:20:31.084712 systemd[1]: Reloading...
Jul 2 00:20:31.230141 zram_generator::config[2208]: No configuration found.
Jul 2 00:20:31.489139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:20:31.614186 systemd[1]: Reloading finished in 528 ms.
Jul 2 00:20:31.693856 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:20:31.693991 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:20:31.694517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:20:31.703205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:20:31.892735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:20:31.909224 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:20:32.023621 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:20:32.023621 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:20:32.023621 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:20:32.023621 kubelet[2262]: I0702 00:20:32.023708 2262 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:20:32.527172 kubelet[2262]: I0702 00:20:32.527093 2262 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:20:32.527172 kubelet[2262]: I0702 00:20:32.527170 2262 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:20:32.530212 kubelet[2262]: I0702 00:20:32.530146 2262 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:20:32.583465 kubelet[2262]: E0702 00:20:32.583193 2262 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.228.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.583465 kubelet[2262]: I0702 00:20:32.583303 2262 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:20:32.612358 kubelet[2262]: I0702 00:20:32.612290 2262 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:20:32.613664 kubelet[2262]: I0702 00:20:32.613576 2262 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:20:32.615692 kubelet[2262]: I0702 00:20:32.615321 2262 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:20:32.618444 kubelet[2262]: I0702 00:20:32.618332 2262 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:20:32.618444 kubelet[2262]: I0702 00:20:32.618439 2262 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:20:32.618807 kubelet[2262]: I0702 00:20:32.618696 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:20:32.618948 kubelet[2262]: I0702 00:20:32.618917 2262 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:20:32.620613 kubelet[2262]: I0702 00:20:32.618958 2262 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:20:32.620613 kubelet[2262]: I0702 00:20:32.619007 2262 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:20:32.620613 kubelet[2262]: I0702 00:20:32.619033 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:20:32.621703 kubelet[2262]: W0702 00:20:32.621645 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.621840 kubelet[2262]: E0702 00:20:32.621728 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.621937 kubelet[2262]: I0702 00:20:32.621916 2262 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:20:32.627535 kubelet[2262]: I0702 00:20:32.627470 2262 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:20:32.629578 kubelet[2262]: W0702 00:20:32.629492 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://64.23.228.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.629868 kubelet[2262]: E0702 00:20:32.629840 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.228.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.631500 kubelet[2262]: W0702 00:20:32.631410 2262 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:20:32.632574 kubelet[2262]: I0702 00:20:32.632540 2262 server.go:1256] "Started kubelet"
Jul 2 00:20:32.634445 kubelet[2262]: I0702 00:20:32.632990 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:20:32.634445 kubelet[2262]: I0702 00:20:32.633360 2262 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:20:32.635031 kubelet[2262]: I0702 00:20:32.634999 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:20:32.641285 kubelet[2262]: E0702 00:20:32.641235 2262 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.228.240:6443/api/v1/namespaces/default/events\": dial tcp 64.23.228.240:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-9-82cbb2c548.17de3d69df7d4276 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-9-82cbb2c548,UID:ci-3975.1.1-9-82cbb2c548,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-9-82cbb2c548,},FirstTimestamp:2024-07-02 00:20:32.632496758 +0000 UTC m=+0.717017567,LastTimestamp:2024-07-02 00:20:32.632496758 +0000 UTC m=+0.717017567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-9-82cbb2c548,}"
Jul 2 00:20:32.641536 kubelet[2262]: I0702 00:20:32.641362 2262 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:20:32.645681 kubelet[2262]: I0702 00:20:32.645623 2262 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:20:32.646744 kubelet[2262]: I0702 00:20:32.646707 2262 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:20:32.649986 kubelet[2262]: I0702 00:20:32.649911 2262 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:20:32.650370 kubelet[2262]: I0702 00:20:32.650325 2262 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:20:32.655910 kubelet[2262]: E0702 00:20:32.655842 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.228.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-9-82cbb2c548?timeout=10s\": dial tcp 64.23.228.240:6443: connect: connection refused" interval="200ms"
Jul 2 00:20:32.656476 kubelet[2262]: W0702 00:20:32.656077 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://64.23.228.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.656476 kubelet[2262]: E0702 00:20:32.656172 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.228.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.656593 kubelet[2262]: I0702 00:20:32.656533 2262 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:20:32.657067 kubelet[2262]: I0702 00:20:32.656789 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:20:32.660347 kubelet[2262]: E0702 00:20:32.660115 2262 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:20:32.661993 kubelet[2262]: I0702 00:20:32.661722 2262 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:20:32.687369 kubelet[2262]: I0702 00:20:32.686531 2262 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:20:32.687369 kubelet[2262]: I0702 00:20:32.686568 2262 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:20:32.688033 kubelet[2262]: I0702 00:20:32.687901 2262 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:20:32.696851 kubelet[2262]: I0702 00:20:32.696740 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:20:32.699324 kubelet[2262]: I0702 00:20:32.699282 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:20:32.699324 kubelet[2262]: I0702 00:20:32.699338 2262 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:20:32.699831 kubelet[2262]: I0702 00:20:32.699385 2262 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:20:32.699831 kubelet[2262]: E0702 00:20:32.699503 2262 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:20:32.701889 kubelet[2262]: I0702 00:20:32.701855 2262 policy_none.go:49] "None policy: Start"
Jul 2 00:20:32.702814 kubelet[2262]: W0702 00:20:32.702759 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://64.23.228.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.703514 kubelet[2262]: E0702 00:20:32.703307 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.228.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused
Jul 2 00:20:32.704372 kubelet[2262]: I0702 00:20:32.704341 2262 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:20:32.704615 kubelet[2262]: I0702 00:20:32.704390 2262 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:20:32.722120 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:20:32.736664 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:20:32.745624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:20:32.749818 kubelet[2262]: I0702 00:20:32.749075 2262 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.749818 kubelet[2262]: E0702 00:20:32.749658 2262 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.228.240:6443/api/v1/nodes\": dial tcp 64.23.228.240:6443: connect: connection refused" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.758980 kubelet[2262]: I0702 00:20:32.758840 2262 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:20:32.759841 kubelet[2262]: I0702 00:20:32.759378 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:20:32.764577 kubelet[2262]: E0702 00:20:32.764527 2262 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-9-82cbb2c548\" not found"
Jul 2 00:20:32.802529 kubelet[2262]: I0702 00:20:32.800097 2262 topology_manager.go:215] "Topology Admit Handler" podUID="89593588025c5fd3b774caad25c9c509" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.805339 kubelet[2262]: I0702 00:20:32.804672 2262 topology_manager.go:215] "Topology Admit Handler" podUID="e4c6aadd727932f43ca598ff0a744301" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.806110 kubelet[2262]: I0702 00:20:32.806072 2262 topology_manager.go:215] "Topology Admit Handler" podUID="28fa530f20f203150765d73f0db4d90a" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.821185 systemd[1]: Created slice kubepods-burstable-pod89593588025c5fd3b774caad25c9c509.slice - libcontainer container kubepods-burstable-pod89593588025c5fd3b774caad25c9c509.slice.
Jul 2 00:20:32.853041 kubelet[2262]: I0702 00:20:32.851612 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853265 kubelet[2262]: I0702 00:20:32.853082 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853265 kubelet[2262]: I0702 00:20:32.853125 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853265 kubelet[2262]: I0702 00:20:32.853163 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853265 kubelet[2262]: I0702 00:20:32.853201 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853265 kubelet[2262]: I0702 00:20:32.853229 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853495 kubelet[2262]: I0702 00:20:32.853259 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853495 kubelet[2262]: I0702 00:20:32.853293 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.853495 kubelet[2262]: I0702 00:20:32.853343 2262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28fa530f20f203150765d73f0db4d90a-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-9-82cbb2c548\" (UID: \"28fa530f20f203150765d73f0db4d90a\") " pod="kube-system/kube-scheduler-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.857743 kubelet[2262]: E0702 00:20:32.857687 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.228.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-9-82cbb2c548?timeout=10s\": dial tcp 64.23.228.240:6443: connect: connection refused" interval="400ms"
Jul 2 00:20:32.859250 systemd[1]: Created slice kubepods-burstable-pode4c6aadd727932f43ca598ff0a744301.slice - libcontainer container kubepods-burstable-pode4c6aadd727932f43ca598ff0a744301.slice.
Jul 2 00:20:32.871403 systemd[1]: Created slice kubepods-burstable-pod28fa530f20f203150765d73f0db4d90a.slice - libcontainer container kubepods-burstable-pod28fa530f20f203150765d73f0db4d90a.slice.
Jul 2 00:20:32.951455 kubelet[2262]: I0702 00:20:32.951382 2262 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:32.952518 kubelet[2262]: E0702 00:20:32.952472 2262 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.228.240:6443/api/v1/nodes\": dial tcp 64.23.228.240:6443: connect: connection refused" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:33.085295 systemd[1]: Started sshd@17-64.23.228.240:22-43.156.152.211:52778.service - OpenSSH per-connection server daemon (43.156.152.211:52778).
Jul 2 00:20:33.152496 kubelet[2262]: E0702 00:20:33.152391 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:33.154065 containerd[1470]: time="2024-07-02T00:20:33.153916724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-9-82cbb2c548,Uid:89593588025c5fd3b774caad25c9c509,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:33.170644 kubelet[2262]: E0702 00:20:33.169284 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:33.175099 containerd[1470]: time="2024-07-02T00:20:33.174810799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-9-82cbb2c548,Uid:e4c6aadd727932f43ca598ff0a744301,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:33.178470 kubelet[2262]: E0702 00:20:33.178391 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:33.182442 containerd[1470]: time="2024-07-02T00:20:33.181990163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-9-82cbb2c548,Uid:28fa530f20f203150765d73f0db4d90a,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:33.259034 kubelet[2262]: E0702 00:20:33.258980 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.228.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-9-82cbb2c548?timeout=10s\": dial tcp 64.23.228.240:6443: connect: connection refused" interval="800ms"
Jul 2 00:20:33.354976 kubelet[2262]: I0702 00:20:33.354276 2262 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548"
Jul
2 00:20:33.354976 kubelet[2262]: E0702 00:20:33.354730 2262 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.228.240:6443/api/v1/nodes\": dial tcp 64.23.228.240:6443: connect: connection refused" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:33.539883 kubelet[2262]: W0702 00:20:33.539526 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.539883 kubelet[2262]: E0702 00:20:33.539758 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.600117 kubelet[2262]: W0702 00:20:33.599313 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://64.23.228.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.600117 kubelet[2262]: E0702 00:20:33.599369 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.228.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.614825 kubelet[2262]: W0702 00:20:33.614570 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://64.23.228.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.614825 
kubelet[2262]: E0702 00:20:33.614667 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.228.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:33.893459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740895834.mount: Deactivated successfully. Jul 2 00:20:33.908182 containerd[1470]: time="2024-07-02T00:20:33.907978880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:20:33.910750 containerd[1470]: time="2024-07-02T00:20:33.910612625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:20:33.914012 containerd[1470]: time="2024-07-02T00:20:33.912645477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:20:33.914880 containerd[1470]: time="2024-07-02T00:20:33.914739168Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:20:33.917664 containerd[1470]: time="2024-07-02T00:20:33.917172162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:20:33.917664 containerd[1470]: time="2024-07-02T00:20:33.917413732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:20:33.918729 containerd[1470]: time="2024-07-02T00:20:33.918678146Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:20:33.926094 containerd[1470]: time="2024-07-02T00:20:33.926024988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:20:33.930552 containerd[1470]: time="2024-07-02T00:20:33.929923978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.80217ms" Jul 2 00:20:33.941027 containerd[1470]: time="2024-07-02T00:20:33.940964539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.402538ms" Jul 2 00:20:33.954450 containerd[1470]: time="2024-07-02T00:20:33.954325916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 779.363318ms" Jul 2 00:20:34.061834 kubelet[2262]: E0702 00:20:34.061205 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.228.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-9-82cbb2c548?timeout=10s\": dial tcp 64.23.228.240:6443: connect: connection refused" interval="1.6s" Jul 2 
00:20:34.132217 kubelet[2262]: W0702 00:20:34.131171 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://64.23.228.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:34.132217 kubelet[2262]: E0702 00:20:34.131282 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.228.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:34.158153 kubelet[2262]: I0702 00:20:34.156824 2262 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:34.158153 kubelet[2262]: E0702 00:20:34.157342 2262 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.228.240:6443/api/v1/nodes\": dial tcp 64.23.228.240:6443: connect: connection refused" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:34.332315 kubelet[2262]: E0702 00:20:34.332078 2262 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.228.240:6443/api/v1/namespaces/default/events\": dial tcp 64.23.228.240:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-9-82cbb2c548.17de3d69df7d4276 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-9-82cbb2c548,UID:ci-3975.1.1-9-82cbb2c548,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-9-82cbb2c548,},FirstTimestamp:2024-07-02 00:20:32.632496758 +0000 UTC m=+0.717017567,LastTimestamp:2024-07-02 00:20:32.632496758 +0000 UTC m=+0.717017567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-9-82cbb2c548,}" Jul 2 00:20:34.345059 containerd[1470]: time="2024-07-02T00:20:34.344803014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:34.345059 containerd[1470]: time="2024-07-02T00:20:34.344874588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.345059 containerd[1470]: time="2024-07-02T00:20:34.344892553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:34.345059 containerd[1470]: time="2024-07-02T00:20:34.344922985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.355851 containerd[1470]: time="2024-07-02T00:20:34.355321186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:34.355851 containerd[1470]: time="2024-07-02T00:20:34.355453915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.355851 containerd[1470]: time="2024-07-02T00:20:34.355486125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:34.355851 containerd[1470]: time="2024-07-02T00:20:34.355502205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.370956 containerd[1470]: time="2024-07-02T00:20:34.370650651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:34.372758 containerd[1470]: time="2024-07-02T00:20:34.372340881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.372758 containerd[1470]: time="2024-07-02T00:20:34.372396915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:34.372758 containerd[1470]: time="2024-07-02T00:20:34.372445640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:34.400877 systemd[1]: Started cri-containerd-39ca5b3b62626557e409d9baffaac54fc7f19c6060bfe3b7673cb51ca2a8934c.scope - libcontainer container 39ca5b3b62626557e409d9baffaac54fc7f19c6060bfe3b7673cb51ca2a8934c. Jul 2 00:20:34.433895 systemd[1]: Started cri-containerd-83906ecc64149c3acd25af6666f9e4d3a8ff7d9d2b7d3fc213fcf8f2425df85c.scope - libcontainer container 83906ecc64149c3acd25af6666f9e4d3a8ff7d9d2b7d3fc213fcf8f2425df85c. Jul 2 00:20:34.451862 systemd[1]: Started cri-containerd-f407188638fcd0d19b8691f9bdf8259ec04da81d6bd661d82f2d05b9b6d06000.scope - libcontainer container f407188638fcd0d19b8691f9bdf8259ec04da81d6bd661d82f2d05b9b6d06000. 
Jul 2 00:20:34.562041 containerd[1470]: time="2024-07-02T00:20:34.561777393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-9-82cbb2c548,Uid:e4c6aadd727932f43ca598ff0a744301,Namespace:kube-system,Attempt:0,} returns sandbox id \"39ca5b3b62626557e409d9baffaac54fc7f19c6060bfe3b7673cb51ca2a8934c\"" Jul 2 00:20:34.566011 kubelet[2262]: E0702 00:20:34.565599 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:34.569297 containerd[1470]: time="2024-07-02T00:20:34.568925997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-9-82cbb2c548,Uid:89593588025c5fd3b774caad25c9c509,Namespace:kube-system,Attempt:0,} returns sandbox id \"f407188638fcd0d19b8691f9bdf8259ec04da81d6bd661d82f2d05b9b6d06000\"" Jul 2 00:20:34.571257 kubelet[2262]: E0702 00:20:34.571010 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:34.578375 containerd[1470]: time="2024-07-02T00:20:34.578292136Z" level=info msg="CreateContainer within sandbox \"39ca5b3b62626557e409d9baffaac54fc7f19c6060bfe3b7673cb51ca2a8934c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:20:34.582612 containerd[1470]: time="2024-07-02T00:20:34.581386440Z" level=info msg="CreateContainer within sandbox \"f407188638fcd0d19b8691f9bdf8259ec04da81d6bd661d82f2d05b9b6d06000\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:20:34.592230 kubelet[2262]: E0702 00:20:34.592168 2262 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://64.23.228.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:34.605853 containerd[1470]: time="2024-07-02T00:20:34.605615358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-9-82cbb2c548,Uid:28fa530f20f203150765d73f0db4d90a,Namespace:kube-system,Attempt:0,} returns sandbox id \"83906ecc64149c3acd25af6666f9e4d3a8ff7d9d2b7d3fc213fcf8f2425df85c\"" Jul 2 00:20:34.607561 kubelet[2262]: E0702 00:20:34.607274 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:34.610467 containerd[1470]: time="2024-07-02T00:20:34.610201981Z" level=info msg="CreateContainer within sandbox \"83906ecc64149c3acd25af6666f9e4d3a8ff7d9d2b7d3fc213fcf8f2425df85c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:20:34.626951 sshd[2293]: Received disconnect from 43.156.152.211 port 52778:11: Bye Bye [preauth] Jul 2 00:20:34.626951 sshd[2293]: Disconnected from authenticating user root 43.156.152.211 port 52778 [preauth] Jul 2 00:20:34.632050 systemd[1]: sshd@17-64.23.228.240:22-43.156.152.211:52778.service: Deactivated successfully. 
Jul 2 00:20:34.671348 containerd[1470]: time="2024-07-02T00:20:34.671092529Z" level=info msg="CreateContainer within sandbox \"39ca5b3b62626557e409d9baffaac54fc7f19c6060bfe3b7673cb51ca2a8934c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a5e42a7ad8c54cc61bdec7c61c14072819171efe2bcdb3dcb127008897a209f\"" Jul 2 00:20:34.672856 containerd[1470]: time="2024-07-02T00:20:34.672800189Z" level=info msg="StartContainer for \"8a5e42a7ad8c54cc61bdec7c61c14072819171efe2bcdb3dcb127008897a209f\"" Jul 2 00:20:34.709085 containerd[1470]: time="2024-07-02T00:20:34.708194377Z" level=info msg="CreateContainer within sandbox \"f407188638fcd0d19b8691f9bdf8259ec04da81d6bd661d82f2d05b9b6d06000\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"faeee8c8e580841925e79e5d73c85dfc7db70a7448b1b04418dda4dc8532dc87\"" Jul 2 00:20:34.709301 containerd[1470]: time="2024-07-02T00:20:34.709257566Z" level=info msg="StartContainer for \"faeee8c8e580841925e79e5d73c85dfc7db70a7448b1b04418dda4dc8532dc87\"" Jul 2 00:20:34.724083 containerd[1470]: time="2024-07-02T00:20:34.723294141Z" level=info msg="CreateContainer within sandbox \"83906ecc64149c3acd25af6666f9e4d3a8ff7d9d2b7d3fc213fcf8f2425df85c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"834dd2f49dfebb27a840898f4af9d17639173d1ed89d886cb7ad30f0cf2f9dcd\"" Jul 2 00:20:34.724083 containerd[1470]: time="2024-07-02T00:20:34.723905651Z" level=info msg="StartContainer for \"834dd2f49dfebb27a840898f4af9d17639173d1ed89d886cb7ad30f0cf2f9dcd\"" Jul 2 00:20:34.725735 systemd[1]: Started cri-containerd-8a5e42a7ad8c54cc61bdec7c61c14072819171efe2bcdb3dcb127008897a209f.scope - libcontainer container 8a5e42a7ad8c54cc61bdec7c61c14072819171efe2bcdb3dcb127008897a209f. 
Jul 2 00:20:34.806825 systemd[1]: Started cri-containerd-faeee8c8e580841925e79e5d73c85dfc7db70a7448b1b04418dda4dc8532dc87.scope - libcontainer container faeee8c8e580841925e79e5d73c85dfc7db70a7448b1b04418dda4dc8532dc87. Jul 2 00:20:34.817273 systemd[1]: Started cri-containerd-834dd2f49dfebb27a840898f4af9d17639173d1ed89d886cb7ad30f0cf2f9dcd.scope - libcontainer container 834dd2f49dfebb27a840898f4af9d17639173d1ed89d886cb7ad30f0cf2f9dcd. Jul 2 00:20:34.902569 containerd[1470]: time="2024-07-02T00:20:34.902333622Z" level=info msg="StartContainer for \"8a5e42a7ad8c54cc61bdec7c61c14072819171efe2bcdb3dcb127008897a209f\" returns successfully" Jul 2 00:20:34.976736 containerd[1470]: time="2024-07-02T00:20:34.976553037Z" level=info msg="StartContainer for \"faeee8c8e580841925e79e5d73c85dfc7db70a7448b1b04418dda4dc8532dc87\" returns successfully" Jul 2 00:20:34.988174 containerd[1470]: time="2024-07-02T00:20:34.988077047Z" level=info msg="StartContainer for \"834dd2f49dfebb27a840898f4af9d17639173d1ed89d886cb7ad30f0cf2f9dcd\" returns successfully" Jul 2 00:20:35.222811 kubelet[2262]: W0702 00:20:35.222732 2262 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:35.223368 kubelet[2262]: E0702 00:20:35.222844 2262 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.228.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-9-82cbb2c548&limit=500&resourceVersion=0": dial tcp 64.23.228.240:6443: connect: connection refused Jul 2 00:20:35.754956 kubelet[2262]: E0702 00:20:35.754905 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Jul 2 00:20:35.762914 kubelet[2262]: E0702 00:20:35.762324 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:35.762914 kubelet[2262]: I0702 00:20:35.762773 2262 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:35.772458 kubelet[2262]: E0702 00:20:35.771315 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:36.777123 kubelet[2262]: E0702 00:20:36.777070 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:36.778958 kubelet[2262]: E0702 00:20:36.778607 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:36.778958 kubelet[2262]: E0702 00:20:36.778911 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:37.665785 kubelet[2262]: E0702 00:20:37.665701 2262 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-9-82cbb2c548\" not found" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:37.744706 kubelet[2262]: I0702 00:20:37.744322 2262 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-9-82cbb2c548" Jul 2 00:20:37.771047 kubelet[2262]: E0702 00:20:37.770732 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 
00:20:37.780884 kubelet[2262]: E0702 00:20:37.780808 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:37.871334 kubelet[2262]: E0702 00:20:37.871262 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:37.972663 kubelet[2262]: E0702 00:20:37.972318 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:38.073481 kubelet[2262]: E0702 00:20:38.073367 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:38.174796 kubelet[2262]: E0702 00:20:38.174636 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:38.276366 kubelet[2262]: E0702 00:20:38.276165 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:38.377265 kubelet[2262]: E0702 00:20:38.377174 2262 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.1.1-9-82cbb2c548\" not found" Jul 2 00:20:38.628244 kubelet[2262]: I0702 00:20:38.627765 2262 apiserver.go:52] "Watching apiserver" Jul 2 00:20:38.651115 kubelet[2262]: I0702 00:20:38.650993 2262 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:20:39.047000 systemd[1]: Started sshd@18-64.23.228.240:22-43.134.124.145:39842.service - OpenSSH per-connection server daemon (43.134.124.145:39842). 
Jul 2 00:20:39.764259 kubelet[2262]: W0702 00:20:39.764064 2262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:20:39.764914 kubelet[2262]: E0702 00:20:39.764787 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:39.783290 kubelet[2262]: E0702 00:20:39.783009 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:40.528129 sshd[2543]: Received disconnect from 43.134.124.145 port 39842:11: Bye Bye [preauth] Jul 2 00:20:40.528129 sshd[2543]: Disconnected from authenticating user root 43.134.124.145 port 39842 [preauth] Jul 2 00:20:40.531037 systemd[1]: sshd@18-64.23.228.240:22-43.134.124.145:39842.service: Deactivated successfully. Jul 2 00:20:41.065175 kubelet[2262]: W0702 00:20:41.062557 2262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:20:41.106842 kubelet[2262]: E0702 00:20:41.105006 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:41.201012 systemd[1]: Reloading requested from client PID 2548 ('systemctl') (unit session-7.scope)... Jul 2 00:20:41.201044 systemd[1]: Reloading... 
Jul 2 00:20:41.219084 kubelet[2262]: W0702 00:20:41.219025 2262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:20:41.221883 kubelet[2262]: E0702 00:20:41.221799 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:41.409433 zram_generator::config[2591]: No configuration found. Jul 2 00:20:41.765304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:20:41.796464 kubelet[2262]: E0702 00:20:41.793866 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:41.796884 kubelet[2262]: E0702 00:20:41.796771 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:20:41.959105 systemd[1]: Reloading finished in 757 ms. Jul 2 00:20:42.050354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:42.074273 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:20:42.075052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:20:42.075404 systemd[1]: kubelet.service: Consumed 1.312s CPU time, 106.3M memory peak, 0B memory swap peak. Jul 2 00:20:42.094666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:42.587372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:20:42.633824 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:20:42.825675 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:20:42.827971 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:20:42.827971 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:20:42.827971 kubelet[2635]: I0702 00:20:42.826970 2635 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:20:42.845212 kubelet[2635]: I0702 00:20:42.844887 2635 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:20:42.846532 kubelet[2635]: I0702 00:20:42.846490 2635 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:20:42.847150 kubelet[2635]: I0702 00:20:42.847114 2635 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:20:42.851840 kubelet[2635]: I0702 00:20:42.851370 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 00:20:42.866029 kubelet[2635]: I0702 00:20:42.865488 2635 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:20:42.895792 sudo[2650]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 00:20:42.896921 sudo[2650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.902813 2635 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.903464 2635 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.903969 2635 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.904014 2635 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.904031 2635 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:20:42.912175 kubelet[2635]: I0702 00:20:42.904090 2635 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:20:42.912739 kubelet[2635]: I0702 00:20:42.904276 2635 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:20:42.912739 kubelet[2635]: I0702 00:20:42.907478 2635 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:20:42.917588 kubelet[2635]: I0702 00:20:42.915843 2635 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:20:42.917588 kubelet[2635]: I0702 00:20:42.915988 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:20:42.966865 kubelet[2635]: I0702 00:20:42.963879 2635 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:20:42.966865 kubelet[2635]: I0702 00:20:42.964344 2635 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:20:42.966865 kubelet[2635]: I0702 00:20:42.965033 2635 server.go:1256] "Started kubelet"
Jul 2 00:20:42.970712 kubelet[2635]: I0702 00:20:42.967547 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:20:43.000545 kubelet[2635]: I0702 00:20:42.982480 2635 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:20:43.000545 kubelet[2635]: I0702 00:20:42.989036 2635 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:20:43.044159 kubelet[2635]: I0702 00:20:43.029922 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:20:43.044159 kubelet[2635]: I0702 00:20:43.030532 2635 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:20:43.044159 kubelet[2635]: I0702 00:20:43.040480 2635 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:20:43.044159 kubelet[2635]: I0702 00:20:43.043961 2635 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:20:43.044518 kubelet[2635]: I0702 00:20:43.044233 2635 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:20:43.048441 kubelet[2635]: E0702 00:20:43.046812 2635 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:20:43.053530 kubelet[2635]: I0702 00:20:43.052390 2635 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:20:43.059747 kubelet[2635]: I0702 00:20:43.056199 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:20:43.066876 kubelet[2635]: I0702 00:20:43.066645 2635 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:20:43.151548 kubelet[2635]: I0702 00:20:43.145457 2635 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.151548 kubelet[2635]: I0702 00:20:43.147628 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:20:43.151548 kubelet[2635]: I0702 00:20:43.150505 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:20:43.151548 kubelet[2635]: I0702 00:20:43.150564 2635 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:20:43.151548 kubelet[2635]: I0702 00:20:43.150595 2635 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:20:43.151548 kubelet[2635]: E0702 00:20:43.150712 2635 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:20:43.221219 kubelet[2635]: I0702 00:20:43.213971 2635 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.221219 kubelet[2635]: I0702 00:20:43.214153 2635 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.252312 kubelet[2635]: E0702 00:20:43.252269 2635 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:20:43.457590 kubelet[2635]: E0702 00:20:43.454816 2635 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:20:43.463191 systemd[1]: Started sshd@19-64.23.228.240:22-43.156.68.109:37300.service - OpenSSH per-connection server daemon (43.156.68.109:37300).
Jul 2 00:20:43.476144 kubelet[2635]: I0702 00:20:43.476074 2635 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:20:43.477662 kubelet[2635]: I0702 00:20:43.477623 2635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:20:43.477922 kubelet[2635]: I0702 00:20:43.477910 2635 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:20:43.478465 kubelet[2635]: I0702 00:20:43.478350 2635 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:20:43.478465 kubelet[2635]: I0702 00:20:43.478388 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:20:43.478465 kubelet[2635]: I0702 00:20:43.478399 2635 policy_none.go:49] "None policy: Start"
Jul 2 00:20:43.488589 kubelet[2635]: I0702 00:20:43.488376 2635 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:20:43.488589 kubelet[2635]: I0702 00:20:43.488468 2635 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:20:43.491604 kubelet[2635]: I0702 00:20:43.490391 2635 state_mem.go:75] "Updated machine memory state"
Jul 2 00:20:43.501500 kubelet[2635]: I0702 00:20:43.501453 2635 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:20:43.504310 kubelet[2635]: I0702 00:20:43.504114 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:20:43.855300 kubelet[2635]: I0702 00:20:43.855159 2635 topology_manager.go:215] "Topology Admit Handler" podUID="e4c6aadd727932f43ca598ff0a744301" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.857564 kubelet[2635]: I0702 00:20:43.857259 2635 topology_manager.go:215] "Topology Admit Handler" podUID="28fa530f20f203150765d73f0db4d90a" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.857564 kubelet[2635]: I0702 00:20:43.857368 2635 topology_manager.go:215] "Topology Admit Handler" podUID="89593588025c5fd3b774caad25c9c509" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.883283 kubelet[2635]: W0702 00:20:43.882596 2635 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:20:43.883283 kubelet[2635]: E0702 00:20:43.882707 2635 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.886484 kubelet[2635]: W0702 00:20:43.885982 2635 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:20:43.886484 kubelet[2635]: W0702 00:20:43.886076 2635 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 00:20:43.886484 kubelet[2635]: E0702 00:20:43.886164 2635 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.1.1-9-82cbb2c548\" already exists" pod="kube-system/kube-scheduler-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.886484 kubelet[2635]: E0702 00:20:43.886337 2635 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" already exists" pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.933945 kubelet[2635]: I0702 00:20:43.933825 2635 apiserver.go:52] "Watching apiserver"
Jul 2 00:20:43.944969 kubelet[2635]: I0702 00:20:43.944917 2635 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:20:43.980531 kubelet[2635]: I0702 00:20:43.980479 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980748 kubelet[2635]: I0702 00:20:43.980591 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980748 kubelet[2635]: I0702 00:20:43.980629 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28fa530f20f203150765d73f0db4d90a-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-9-82cbb2c548\" (UID: \"28fa530f20f203150765d73f0db4d90a\") " pod="kube-system/kube-scheduler-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980748 kubelet[2635]: I0702 00:20:43.980659 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980923 kubelet[2635]: I0702 00:20:43.980757 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980923 kubelet[2635]: I0702 00:20:43.980802 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980923 kubelet[2635]: I0702 00:20:43.980836 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.980923 kubelet[2635]: I0702 00:20:43.980888 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4c6aadd727932f43ca598ff0a744301-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-9-82cbb2c548\" (UID: \"e4c6aadd727932f43ca598ff0a744301\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:43.981098 kubelet[2635]: I0702 00:20:43.980929 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89593588025c5fd3b774caad25c9c509-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-9-82cbb2c548\" (UID: \"89593588025c5fd3b774caad25c9c509\") " pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548"
Jul 2 00:20:44.188796 kubelet[2635]: E0702 00:20:44.188693 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.203442 kubelet[2635]: E0702 00:20:44.202512 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.203442 kubelet[2635]: E0702 00:20:44.202912 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.276296 kubelet[2635]: I0702 00:20:44.275579 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-9-82cbb2c548" podStartSLOduration=3.2754959 podStartE2EDuration="3.2754959s" podCreationTimestamp="2024-07-02 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:44.271926703 +0000 UTC m=+1.627735378" watchObservedRunningTime="2024-07-02 00:20:44.2754959 +0000 UTC m=+1.631304564"
Jul 2 00:20:44.306665 kubelet[2635]: E0702 00:20:44.299889 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.306665 kubelet[2635]: E0702 00:20:44.301780 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.306665 kubelet[2635]: E0702 00:20:44.302951 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:44.387857 kubelet[2635]: I0702 00:20:44.387802 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-9-82cbb2c548" podStartSLOduration=3.387722054 podStartE2EDuration="3.387722054s" podCreationTimestamp="2024-07-02 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:44.335356427 +0000 UTC m=+1.691165089" watchObservedRunningTime="2024-07-02 00:20:44.387722054 +0000 UTC m=+1.743530717"
Jul 2 00:20:44.389321 kubelet[2635]: I0702 00:20:44.389268 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-9-82cbb2c548" podStartSLOduration=5.389192108 podStartE2EDuration="5.389192108s" podCreationTimestamp="2024-07-02 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:44.389135741 +0000 UTC m=+1.744944402" watchObservedRunningTime="2024-07-02 00:20:44.389192108 +0000 UTC m=+1.745000771"
Jul 2 00:20:45.019634 sudo[2650]: pam_unix(sudo:session): session closed for user root
Jul 2 00:20:45.114099 sshd[2672]: Received disconnect from 43.156.68.109 port 37300:11: Bye Bye [preauth]
Jul 2 00:20:45.114099 sshd[2672]: Disconnected from authenticating user root 43.156.68.109 port 37300 [preauth]
Jul 2 00:20:45.120097 systemd[1]: sshd@19-64.23.228.240:22-43.156.68.109:37300.service: Deactivated successfully.
Jul 2 00:20:45.311162 kubelet[2635]: E0702 00:20:45.308838 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:45.955079 kubelet[2635]: E0702 00:20:45.955010 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:46.305735 kubelet[2635]: E0702 00:20:46.305444 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:46.319041 kubelet[2635]: E0702 00:20:46.307528 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:47.310700 kubelet[2635]: E0702 00:20:47.310657 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:47.620614 systemd[1]: Started sshd@20-64.23.228.240:22-43.153.223.232:41514.service - OpenSSH per-connection server daemon (43.153.223.232:41514).
Jul 2 00:20:48.029815 sudo[1665]: pam_unix(sudo:session): session closed for user root
Jul 2 00:20:48.050930 sshd[1662]: pam_unix(sshd:session): session closed for user core
Jul 2 00:20:48.058957 systemd[1]: sshd@9-64.23.228.240:22-147.75.109.163:40054.service: Deactivated successfully.
Jul 2 00:20:48.063087 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:20:48.064121 systemd[1]: session-7.scope: Consumed 7.877s CPU time, 136.6M memory peak, 0B memory swap peak.
Jul 2 00:20:48.066874 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:20:48.069524 systemd-logind[1445]: Removed session 7.
Jul 2 00:20:48.838842 sshd[2700]: Invalid user user from 43.153.223.232 port 41514
Jul 2 00:20:48.962227 kubelet[2635]: E0702 00:20:48.962159 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:49.067970 sshd[2700]: Received disconnect from 43.153.223.232 port 41514:11: Bye Bye [preauth]
Jul 2 00:20:49.067970 sshd[2700]: Disconnected from invalid user user 43.153.223.232 port 41514 [preauth]
Jul 2 00:20:49.071852 systemd[1]: sshd@20-64.23.228.240:22-43.153.223.232:41514.service: Deactivated successfully.
Jul 2 00:20:49.331318 kubelet[2635]: E0702 00:20:49.330376 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:55.990891 kubelet[2635]: I0702 00:20:55.990822 2635 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:20:55.991383 containerd[1470]: time="2024-07-02T00:20:55.991189001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:20:55.993359 kubelet[2635]: I0702 00:20:55.992033 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:20:56.023291 kubelet[2635]: I0702 00:20:56.022488 2635 topology_manager.go:215] "Topology Admit Handler" podUID="4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb" podNamespace="kube-system" podName="kube-proxy-9bfjr"
Jul 2 00:20:56.040065 systemd[1]: Created slice kubepods-besteffort-pod4120b4cc_f6fd_4329_8b4b_9e8f5211f1fb.slice - libcontainer container kubepods-besteffort-pod4120b4cc_f6fd_4329_8b4b_9e8f5211f1fb.slice.
Jul 2 00:20:56.056901 kubelet[2635]: I0702 00:20:56.056837 2635 topology_manager.go:215] "Topology Admit Handler" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" podNamespace="kube-system" podName="cilium-g269h"
Jul 2 00:20:56.077808 systemd[1]: Created slice kubepods-burstable-podc5662d66_0c07_4ace_a464_ea82897a6149.slice - libcontainer container kubepods-burstable-podc5662d66_0c07_4ace_a464_ea82897a6149.slice.
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132637 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-bpf-maps\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132701 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-etc-cni-netd\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132728 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-kernel\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132751 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-run\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132771 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-xtables-lock\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133256 kubelet[2635]: I0702 00:20:56.132790 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-net\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133818 kubelet[2635]: I0702 00:20:56.132815 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-cgroup\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133818 kubelet[2635]: I0702 00:20:56.132840 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-config-path\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133818 kubelet[2635]: I0702 00:20:56.132872 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb-kube-proxy\") pod \"kube-proxy-9bfjr\" (UID: \"4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb\") " pod="kube-system/kube-proxy-9bfjr"
Jul 2 00:20:56.133818 kubelet[2635]: I0702 00:20:56.132901 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5662d66-0c07-4ace-a464-ea82897a6149-clustermesh-secrets\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.133818 kubelet[2635]: I0702 00:20:56.132928 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjxd\" (UniqueName: \"kubernetes.io/projected/4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb-kube-api-access-7gjxd\") pod \"kube-proxy-9bfjr\" (UID: \"4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb\") " pod="kube-system/kube-proxy-9bfjr"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.132949 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-hubble-tls\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.132968 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb-xtables-lock\") pod \"kube-proxy-9bfjr\" (UID: \"4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb\") " pod="kube-system/kube-proxy-9bfjr"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.132990 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb-lib-modules\") pod \"kube-proxy-9bfjr\" (UID: \"4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb\") " pod="kube-system/kube-proxy-9bfjr"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.133015 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-hostproc\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.133041 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cni-path\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.134038 kubelet[2635]: I0702 00:20:56.133067 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-lib-modules\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.134324 kubelet[2635]: I0702 00:20:56.133101 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpldd\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-kube-api-access-fpldd\") pod \"cilium-g269h\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " pod="kube-system/cilium-g269h"
Jul 2 00:20:56.192336 kubelet[2635]: I0702 00:20:56.191339 2635 topology_manager.go:215] "Topology Admit Handler" podUID="e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" podNamespace="kube-system" podName="cilium-operator-5cc964979-v9vhv"
Jul 2 00:20:56.207397 systemd[1]: Created slice kubepods-besteffort-pode9881cce_6c64_4e1d_85e6_b0cbdad5e8ea.slice - libcontainer container kubepods-besteffort-pode9881cce_6c64_4e1d_85e6_b0cbdad5e8ea.slice.
Jul 2 00:20:56.334705 kubelet[2635]: I0702 00:20:56.334366 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-cilium-config-path\") pod \"cilium-operator-5cc964979-v9vhv\" (UID: \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\") " pod="kube-system/cilium-operator-5cc964979-v9vhv"
Jul 2 00:20:56.334705 kubelet[2635]: I0702 00:20:56.334443 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bmk\" (UniqueName: \"kubernetes.io/projected/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-kube-api-access-l5bmk\") pod \"cilium-operator-5cc964979-v9vhv\" (UID: \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\") " pod="kube-system/cilium-operator-5cc964979-v9vhv"
Jul 2 00:20:56.353794 kubelet[2635]: E0702 00:20:56.353744 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.358199 containerd[1470]: time="2024-07-02T00:20:56.357891978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bfjr,Uid:4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:56.386326 kubelet[2635]: E0702 00:20:56.385786 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.387819 containerd[1470]: time="2024-07-02T00:20:56.387133410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g269h,Uid:c5662d66-0c07-4ace-a464-ea82897a6149,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:56.416193 containerd[1470]: time="2024-07-02T00:20:56.415301213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:56.416193 containerd[1470]: time="2024-07-02T00:20:56.415706067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.416193 containerd[1470]: time="2024-07-02T00:20:56.415768191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:56.416193 containerd[1470]: time="2024-07-02T00:20:56.415793801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.451857 containerd[1470]: time="2024-07-02T00:20:56.450740290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:56.451857 containerd[1470]: time="2024-07-02T00:20:56.450838980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.451857 containerd[1470]: time="2024-07-02T00:20:56.450867793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:56.451857 containerd[1470]: time="2024-07-02T00:20:56.450882497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.483785 systemd[1]: Started cri-containerd-8dca66240f6ad6f68fe479e60459b45ff14202c915b6350d7e87613c5651644b.scope - libcontainer container 8dca66240f6ad6f68fe479e60459b45ff14202c915b6350d7e87613c5651644b.
Jul 2 00:20:56.505827 systemd[1]: Started cri-containerd-5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274.scope - libcontainer container 5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274.
Jul 2 00:20:56.516958 kubelet[2635]: E0702 00:20:56.516353 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.517733 containerd[1470]: time="2024-07-02T00:20:56.517669304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v9vhv,Uid:e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:56.579766 containerd[1470]: time="2024-07-02T00:20:56.579116072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bfjr,Uid:4120b4cc-f6fd-4329-8b4b-9e8f5211f1fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dca66240f6ad6f68fe479e60459b45ff14202c915b6350d7e87613c5651644b\""
Jul 2 00:20:56.582458 kubelet[2635]: E0702 00:20:56.582163 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.596373 containerd[1470]: time="2024-07-02T00:20:56.595846670Z" level=info msg="CreateContainer within sandbox \"8dca66240f6ad6f68fe479e60459b45ff14202c915b6350d7e87613c5651644b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:20:56.613072 containerd[1470]: time="2024-07-02T00:20:56.612726978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g269h,Uid:c5662d66-0c07-4ace-a464-ea82897a6149,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\""
Jul 2 00:20:56.615964 kubelet[2635]: E0702 00:20:56.615259 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.618677 containerd[1470]: time="2024-07-02T00:20:56.618455921Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 00:20:56.646892 containerd[1470]: time="2024-07-02T00:20:56.646186559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:56.647264 containerd[1470]: time="2024-07-02T00:20:56.647186794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.647547 containerd[1470]: time="2024-07-02T00:20:56.647496873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:56.647725 containerd[1470]: time="2024-07-02T00:20:56.647687222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:56.661083 containerd[1470]: time="2024-07-02T00:20:56.661005843Z" level=info msg="CreateContainer within sandbox \"8dca66240f6ad6f68fe479e60459b45ff14202c915b6350d7e87613c5651644b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fca42d16f55c85b4ed79041348bd12dd14cb9f6fa8b4c6e21c8e2b1bc448d6c\""
Jul 2 00:20:56.664461 containerd[1470]: time="2024-07-02T00:20:56.662891500Z" level=info msg="StartContainer for \"8fca42d16f55c85b4ed79041348bd12dd14cb9f6fa8b4c6e21c8e2b1bc448d6c\""
Jul 2 00:20:56.685991 systemd[1]: Started cri-containerd-71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386.scope - libcontainer container 71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386.
Jul 2 00:20:56.733790 systemd[1]: Started cri-containerd-8fca42d16f55c85b4ed79041348bd12dd14cb9f6fa8b4c6e21c8e2b1bc448d6c.scope - libcontainer container 8fca42d16f55c85b4ed79041348bd12dd14cb9f6fa8b4c6e21c8e2b1bc448d6c.
Jul 2 00:20:56.792055 containerd[1470]: time="2024-07-02T00:20:56.791991610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v9vhv,Uid:e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\""
Jul 2 00:20:56.796095 kubelet[2635]: E0702 00:20:56.796048 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:56.807721 containerd[1470]: time="2024-07-02T00:20:56.807532324Z" level=info msg="StartContainer for \"8fca42d16f55c85b4ed79041348bd12dd14cb9f6fa8b4c6e21c8e2b1bc448d6c\" returns successfully"
Jul 2 00:20:57.353174 kubelet[2635]: E0702 00:20:57.353109 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:20:57.378000 kubelet[2635]: I0702 00:20:57.377539 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9bfjr" podStartSLOduration=1.377469346 podStartE2EDuration="1.377469346s" podCreationTimestamp="2024-07-02 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:57.377267924 +0000 UTC m=+14.733076585" watchObservedRunningTime="2024-07-02 00:20:57.377469346 +0000 UTC m=+14.733278005"
Jul 2 00:21:03.609027 systemd[1]: Started sshd@21-64.23.228.240:22-43.134.0.65:49554.service - OpenSSH per-connection server daemon (43.134.0.65:49554).
Jul 2 00:21:04.568974 systemd[1]: Started sshd@22-64.23.228.240:22-43.163.214.38:51836.service - OpenSSH per-connection server daemon (43.163.214.38:51836).
Jul 2 00:21:05.120241 sshd[3009]: Received disconnect from 43.134.0.65 port 49554:11: Bye Bye [preauth]
Jul 2 00:21:05.120241 sshd[3009]: Disconnected from authenticating user root 43.134.0.65 port 49554 [preauth]
Jul 2 00:21:05.125003 systemd[1]: sshd@21-64.23.228.240:22-43.134.0.65:49554.service: Deactivated successfully.
Jul 2 00:21:05.261960 sshd[3013]: Invalid user ftpadmin from 43.163.214.38 port 51836
Jul 2 00:21:05.419514 sshd[3013]: Received disconnect from 43.163.214.38 port 51836:11: Bye Bye [preauth]
Jul 2 00:21:05.419514 sshd[3013]: Disconnected from invalid user ftpadmin 43.163.214.38 port 51836 [preauth]
Jul 2 00:21:05.422249 systemd[1]: sshd@22-64.23.228.240:22-43.163.214.38:51836.service: Deactivated successfully.
Jul 2 00:21:06.534402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844267543.mount: Deactivated successfully.
Jul 2 00:21:11.981638 systemd[1]: Started sshd@23-64.23.228.240:22-107.175.206.68:52062.service - OpenSSH per-connection server daemon (107.175.206.68:52062).
Jul 2 00:21:12.582781 sshd[3040]: Invalid user frappe from 107.175.206.68 port 52062
Jul 2 00:21:12.670228 sshd[3040]: Received disconnect from 107.175.206.68 port 52062:11: Bye Bye [preauth]
Jul 2 00:21:12.670228 sshd[3040]: Disconnected from invalid user frappe 107.175.206.68 port 52062 [preauth]
Jul 2 00:21:12.675456 systemd[1]: sshd@23-64.23.228.240:22-107.175.206.68:52062.service: Deactivated successfully.
Jul 2 00:21:13.240501 containerd[1470]: time="2024-07-02T00:21:13.240387728Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:13.243929 containerd[1470]: time="2024-07-02T00:21:13.242441901Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299"
Jul 2 00:21:13.266397 containerd[1470]: time="2024-07-02T00:21:13.266316953Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:13.270715 containerd[1470]: time="2024-07-02T00:21:13.270645668Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.651950647s"
Jul 2 00:21:13.271001 containerd[1470]: time="2024-07-02T00:21:13.270974054Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 00:21:13.274132 containerd[1470]: time="2024-07-02T00:21:13.273159934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 00:21:13.277180 containerd[1470]: time="2024-07-02T00:21:13.276368576Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:21:13.437710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272972385.mount: Deactivated successfully.
Jul 2 00:21:13.449880 containerd[1470]: time="2024-07-02T00:21:13.449693388Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\""
Jul 2 00:21:13.452616 containerd[1470]: time="2024-07-02T00:21:13.451694475Z" level=info msg="StartContainer for \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\""
Jul 2 00:21:13.638921 systemd[1]: run-containerd-runc-k8s.io-24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87-runc.FXZuf7.mount: Deactivated successfully.
Jul 2 00:21:13.671886 systemd[1]: Started cri-containerd-24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87.scope - libcontainer container 24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87.
Jul 2 00:21:13.785471 containerd[1470]: time="2024-07-02T00:21:13.784875855Z" level=info msg="StartContainer for \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\" returns successfully"
Jul 2 00:21:13.813353 systemd[1]: cri-containerd-24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87.scope: Deactivated successfully.
Jul 2 00:21:14.003012 containerd[1470]: time="2024-07-02T00:21:13.945960937Z" level=info msg="shim disconnected" id=24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87 namespace=k8s.io
Jul 2 00:21:14.003012 containerd[1470]: time="2024-07-02T00:21:14.002794673Z" level=warning msg="cleaning up after shim disconnected" id=24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87 namespace=k8s.io
Jul 2 00:21:14.003012 containerd[1470]: time="2024-07-02T00:21:14.002819056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:14.427826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87-rootfs.mount: Deactivated successfully.
Jul 2 00:21:14.537735 kubelet[2635]: E0702 00:21:14.535941 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:14.548629 containerd[1470]: time="2024-07-02T00:21:14.548563586Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:21:14.610252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886716193.mount: Deactivated successfully.
Jul 2 00:21:14.655026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506430227.mount: Deactivated successfully.
Jul 2 00:21:14.675802 containerd[1470]: time="2024-07-02T00:21:14.675725927Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\""
Jul 2 00:21:14.684628 containerd[1470]: time="2024-07-02T00:21:14.683793921Z" level=info msg="StartContainer for \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\""
Jul 2 00:21:14.750831 systemd[1]: Started cri-containerd-c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe.scope - libcontainer container c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe.
Jul 2 00:21:14.789888 systemd[1]: Started sshd@24-64.23.228.240:22-103.82.240.189:46736.service - OpenSSH per-connection server daemon (103.82.240.189:46736).
Jul 2 00:21:14.885055 containerd[1470]: time="2024-07-02T00:21:14.884963682Z" level=info msg="StartContainer for \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\" returns successfully"
Jul 2 00:21:14.896850 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:21:14.897272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:21:14.897367 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:21:14.905972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:21:14.906616 systemd[1]: cri-containerd-c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe.scope: Deactivated successfully.
Jul 2 00:21:14.966889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:21:14.987538 containerd[1470]: time="2024-07-02T00:21:14.987438300Z" level=info msg="shim disconnected" id=c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe namespace=k8s.io
Jul 2 00:21:14.988485 containerd[1470]: time="2024-07-02T00:21:14.988307650Z" level=warning msg="cleaning up after shim disconnected" id=c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe namespace=k8s.io
Jul 2 00:21:14.988485 containerd[1470]: time="2024-07-02T00:21:14.988356973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:15.545781 kubelet[2635]: E0702 00:21:15.545375 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:15.560604 containerd[1470]: time="2024-07-02T00:21:15.560445827Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:21:15.639288 containerd[1470]: time="2024-07-02T00:21:15.638868473Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\""
Jul 2 00:21:15.640590 containerd[1470]: time="2024-07-02T00:21:15.640165284Z" level=info msg="StartContainer for \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\""
Jul 2 00:21:15.724971 systemd[1]: Started cri-containerd-3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae.scope - libcontainer container 3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae.
Jul 2 00:21:15.801114 systemd[1]: cri-containerd-3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae.scope: Deactivated successfully.
Jul 2 00:21:15.803505 containerd[1470]: time="2024-07-02T00:21:15.802868033Z" level=info msg="StartContainer for \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\" returns successfully"
Jul 2 00:21:15.883587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae-rootfs.mount: Deactivated successfully.
Jul 2 00:21:16.002869 containerd[1470]: time="2024-07-02T00:21:16.001622664Z" level=info msg="shim disconnected" id=3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae namespace=k8s.io
Jul 2 00:21:16.002869 containerd[1470]: time="2024-07-02T00:21:16.001729462Z" level=warning msg="cleaning up after shim disconnected" id=3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae namespace=k8s.io
Jul 2 00:21:16.002869 containerd[1470]: time="2024-07-02T00:21:16.001741294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:16.099725 containerd[1470]: time="2024-07-02T00:21:16.097931338Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:16.099947 containerd[1470]: time="2024-07-02T00:21:16.099893896Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907253"
Jul 2 00:21:16.100395 sshd[3137]: Received disconnect from 103.82.240.189 port 46736:11: Bye Bye [preauth]
Jul 2 00:21:16.100395 sshd[3137]: Disconnected from authenticating user root 103.82.240.189 port 46736 [preauth]
Jul 2 00:21:16.102638 containerd[1470]: time="2024-07-02T00:21:16.102300746Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:16.103668 systemd[1]: sshd@24-64.23.228.240:22-103.82.240.189:46736.service: Deactivated successfully.
Jul 2 00:21:16.116934 containerd[1470]: time="2024-07-02T00:21:16.116404028Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.843173477s"
Jul 2 00:21:16.116934 containerd[1470]: time="2024-07-02T00:21:16.116530485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 00:21:16.122645 containerd[1470]: time="2024-07-02T00:21:16.122042977Z" level=info msg="CreateContainer within sandbox \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:21:16.152196 containerd[1470]: time="2024-07-02T00:21:16.152019448Z" level=info msg="CreateContainer within sandbox \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\""
Jul 2 00:21:16.154480 containerd[1470]: time="2024-07-02T00:21:16.152741664Z" level=info msg="StartContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\""
Jul 2 00:21:16.205846 systemd[1]: Started cri-containerd-4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80.scope - libcontainer container 4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80.
Jul 2 00:21:16.260302 containerd[1470]: time="2024-07-02T00:21:16.260221786Z" level=info msg="StartContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" returns successfully"
Jul 2 00:21:16.563349 kubelet[2635]: E0702 00:21:16.563292 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:16.573510 kubelet[2635]: E0702 00:21:16.572912 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:16.578299 containerd[1470]: time="2024-07-02T00:21:16.578230481Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:21:16.619845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224352429.mount: Deactivated successfully.
Jul 2 00:21:16.622450 containerd[1470]: time="2024-07-02T00:21:16.621907555Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\""
Jul 2 00:21:16.623527 containerd[1470]: time="2024-07-02T00:21:16.623480258Z" level=info msg="StartContainer for \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\""
Jul 2 00:21:16.633572 kubelet[2635]: I0702 00:21:16.633499 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-v9vhv" podStartSLOduration=1.315807016 podStartE2EDuration="20.633450851s" podCreationTimestamp="2024-07-02 00:20:56 +0000 UTC" firstStartedPulling="2024-07-02 00:20:56.799713947 +0000 UTC m=+14.155522597" lastFinishedPulling="2024-07-02 00:21:16.117357792 +0000 UTC m=+33.473166432" observedRunningTime="2024-07-02 00:21:16.63313075 +0000 UTC m=+33.988939411" watchObservedRunningTime="2024-07-02 00:21:16.633450851 +0000 UTC m=+33.989259511"
Jul 2 00:21:16.715783 systemd[1]: Started cri-containerd-a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff.scope - libcontainer container a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff.
Jul 2 00:21:16.823373 systemd[1]: cri-containerd-a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff.scope: Deactivated successfully.
Jul 2 00:21:16.829870 containerd[1470]: time="2024-07-02T00:21:16.826350492Z" level=info msg="StartContainer for \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\" returns successfully"
Jul 2 00:21:16.896268 containerd[1470]: time="2024-07-02T00:21:16.896177676Z" level=info msg="shim disconnected" id=a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff namespace=k8s.io
Jul 2 00:21:16.896268 containerd[1470]: time="2024-07-02T00:21:16.896251676Z" level=warning msg="cleaning up after shim disconnected" id=a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff namespace=k8s.io
Jul 2 00:21:16.896268 containerd[1470]: time="2024-07-02T00:21:16.896263286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:21:16.966633 containerd[1470]: time="2024-07-02T00:21:16.966560445Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:21:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 00:21:17.436163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff-rootfs.mount: Deactivated successfully.
Jul 2 00:21:17.583825 kubelet[2635]: E0702 00:21:17.583746 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:17.585522 kubelet[2635]: E0702 00:21:17.584327 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:17.592766 containerd[1470]: time="2024-07-02T00:21:17.592697053Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:21:17.620263 containerd[1470]: time="2024-07-02T00:21:17.620195943Z" level=info msg="CreateContainer within sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\""
Jul 2 00:21:17.620949 containerd[1470]: time="2024-07-02T00:21:17.620900148Z" level=info msg="StartContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\""
Jul 2 00:21:17.720723 systemd[1]: Started cri-containerd-6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5.scope - libcontainer container 6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5.
Jul 2 00:21:17.808068 containerd[1470]: time="2024-07-02T00:21:17.807900429Z" level=info msg="StartContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" returns successfully"
Jul 2 00:21:18.101820 kubelet[2635]: I0702 00:21:18.101630 2635 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 00:21:18.153401 kubelet[2635]: I0702 00:21:18.152385 2635 topology_manager.go:215] "Topology Admit Handler" podUID="128c3641-b15f-4d73-8b55-9241741d4fbf" podNamespace="kube-system" podName="coredns-76f75df574-bkjmz"
Jul 2 00:21:18.166119 kubelet[2635]: I0702 00:21:18.166062 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwch\" (UniqueName: \"kubernetes.io/projected/128c3641-b15f-4d73-8b55-9241741d4fbf-kube-api-access-qgwch\") pod \"coredns-76f75df574-bkjmz\" (UID: \"128c3641-b15f-4d73-8b55-9241741d4fbf\") " pod="kube-system/coredns-76f75df574-bkjmz"
Jul 2 00:21:18.166119 kubelet[2635]: I0702 00:21:18.166129 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/128c3641-b15f-4d73-8b55-9241741d4fbf-config-volume\") pod \"coredns-76f75df574-bkjmz\" (UID: \"128c3641-b15f-4d73-8b55-9241741d4fbf\") " pod="kube-system/coredns-76f75df574-bkjmz"
Jul 2 00:21:18.172521 kubelet[2635]: I0702 00:21:18.171290 2635 topology_manager.go:215] "Topology Admit Handler" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4" podNamespace="kube-system" podName="coredns-76f75df574-fsw6x"
Jul 2 00:21:18.174492 systemd[1]: Created slice kubepods-burstable-pod128c3641_b15f_4d73_8b55_9241741d4fbf.slice - libcontainer container kubepods-burstable-pod128c3641_b15f_4d73_8b55_9241741d4fbf.slice.
Jul 2 00:21:18.196381 systemd[1]: Created slice kubepods-burstable-pode5439f5a_d36f_4f2c_8340_872248bf73c4.slice - libcontainer container kubepods-burstable-pode5439f5a_d36f_4f2c_8340_872248bf73c4.slice.
Jul 2 00:21:18.266872 kubelet[2635]: I0702 00:21:18.266741 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctcxx\" (UniqueName: \"kubernetes.io/projected/e5439f5a-d36f-4f2c-8340-872248bf73c4-kube-api-access-ctcxx\") pod \"coredns-76f75df574-fsw6x\" (UID: \"e5439f5a-d36f-4f2c-8340-872248bf73c4\") " pod="kube-system/coredns-76f75df574-fsw6x"
Jul 2 00:21:18.266872 kubelet[2635]: I0702 00:21:18.266873 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5439f5a-d36f-4f2c-8340-872248bf73c4-config-volume\") pod \"coredns-76f75df574-fsw6x\" (UID: \"e5439f5a-d36f-4f2c-8340-872248bf73c4\") " pod="kube-system/coredns-76f75df574-fsw6x"
Jul 2 00:21:18.492446 kubelet[2635]: E0702 00:21:18.492381 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:18.500395 containerd[1470]: time="2024-07-02T00:21:18.498966126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bkjmz,Uid:128c3641-b15f-4d73-8b55-9241741d4fbf,Namespace:kube-system,Attempt:0,}"
Jul 2 00:21:18.510539 kubelet[2635]: E0702 00:21:18.508124 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:18.514849 containerd[1470]: time="2024-07-02T00:21:18.514756969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsw6x,Uid:e5439f5a-d36f-4f2c-8340-872248bf73c4,Namespace:kube-system,Attempt:0,}"
Jul 2 00:21:18.672530 kubelet[2635]: E0702 00:21:18.672282 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:19.656934 kubelet[2635]: E0702 00:21:19.656876 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:20.654227 kubelet[2635]: E0702 00:21:20.654185 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:21.028276 systemd-networkd[1368]: cilium_host: Link UP
Jul 2 00:21:21.028827 systemd-networkd[1368]: cilium_net: Link UP
Jul 2 00:21:21.029336 systemd-networkd[1368]: cilium_net: Gained carrier
Jul 2 00:21:21.029751 systemd-networkd[1368]: cilium_host: Gained carrier
Jul 2 00:21:21.256316 systemd-networkd[1368]: cilium_vxlan: Link UP
Jul 2 00:21:21.256325 systemd-networkd[1368]: cilium_vxlan: Gained carrier
Jul 2 00:21:21.309820 systemd-networkd[1368]: cilium_net: Gained IPv6LL
Jul 2 00:21:21.657033 kubelet[2635]: E0702 00:21:21.656975 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:21.789660 systemd-networkd[1368]: cilium_host: Gained IPv6LL
Jul 2 00:21:21.892374 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:21:22.814599 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL
Jul 2 00:21:23.090869 systemd[1]: Started sshd@25-64.23.228.240:22-112.6.122.181:33232.service - OpenSSH per-connection server daemon (112.6.122.181:33232).
Jul 2 00:21:23.156959 systemd[1]: Started sshd@26-64.23.228.240:22-43.156.152.211:39622.service - OpenSSH per-connection server daemon (43.156.152.211:39622).
Jul 2 00:21:23.194881 systemd-networkd[1368]: lxc_health: Link UP
Jul 2 00:21:23.206674 systemd-networkd[1368]: lxc_health: Gained carrier
Jul 2 00:21:23.691579 systemd-networkd[1368]: lxc2940ecaf0da5: Link UP
Jul 2 00:21:23.701276 systemd-networkd[1368]: lxca956d5c921c6: Link UP
Jul 2 00:21:23.705471 kernel: eth0: renamed from tmp6eed3
Jul 2 00:21:23.709474 kernel: eth0: renamed from tmp0e8b7
Jul 2 00:21:23.720680 systemd-networkd[1368]: lxc2940ecaf0da5: Gained carrier
Jul 2 00:21:23.725196 systemd-networkd[1368]: lxca956d5c921c6: Gained carrier
Jul 2 00:21:24.393774 kubelet[2635]: E0702 00:21:24.393391 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:24.422548 kubelet[2635]: I0702 00:21:24.422490 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-g269h" podStartSLOduration=11.767693263 podStartE2EDuration="28.422411724s" podCreationTimestamp="2024-07-02 00:20:56 +0000 UTC" firstStartedPulling="2024-07-02 00:20:56.616966269 +0000 UTC m=+13.972774910" lastFinishedPulling="2024-07-02 00:21:13.271684725 +0000 UTC m=+30.627493371" observedRunningTime="2024-07-02 00:21:18.911376254 +0000 UTC m=+36.267184914" watchObservedRunningTime="2024-07-02 00:21:24.422411724 +0000 UTC m=+41.778220381"
Jul 2 00:21:24.498322 sshd[3816]: Received disconnect from 112.6.122.181 port 33232:11: Bye Bye [preauth]
Jul 2 00:21:24.498322 sshd[3816]: Disconnected from authenticating user root 112.6.122.181 port 33232 [preauth]
Jul 2 00:21:24.506360 systemd[1]: sshd@25-64.23.228.240:22-112.6.122.181:33232.service: Deactivated successfully.
Jul 2 00:21:24.666668 kubelet[2635]: E0702 00:21:24.666229 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:24.707027 sshd[3823]: Received disconnect from 43.156.152.211 port 39622:11: Bye Bye [preauth]
Jul 2 00:21:24.707027 sshd[3823]: Disconnected from authenticating user root 43.156.152.211 port 39622 [preauth]
Jul 2 00:21:24.709153 systemd[1]: sshd@26-64.23.228.240:22-43.156.152.211:39622.service: Deactivated successfully.
Jul 2 00:21:24.928585 systemd-networkd[1368]: lxc2940ecaf0da5: Gained IPv6LL
Jul 2 00:21:25.182675 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jul 2 00:21:25.438655 systemd-networkd[1368]: lxca956d5c921c6: Gained IPv6LL
Jul 2 00:21:29.677903 systemd[1]: Started sshd@27-64.23.228.240:22-43.134.124.145:55126.service - OpenSSH per-connection server daemon (43.134.124.145:55126).
Jul 2 00:21:31.129161 containerd[1470]: time="2024-07-02T00:21:31.128695375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:21:31.132929 containerd[1470]: time="2024-07-02T00:21:31.130657042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:31.133214 containerd[1470]: time="2024-07-02T00:21:31.133103557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:21:31.133481 containerd[1470]: time="2024-07-02T00:21:31.133383395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:31.169247 containerd[1470]: time="2024-07-02T00:21:31.168326094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:21:31.170675 containerd[1470]: time="2024-07-02T00:21:31.170542850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:31.170675 containerd[1470]: time="2024-07-02T00:21:31.170608915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:21:31.170675 containerd[1470]: time="2024-07-02T00:21:31.170633647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:31.254189 systemd[1]: Started cri-containerd-6eed3561946f48d53ab0d603f4d2e1ea89e796cb2e479f3391088a6e3d1c49c0.scope - libcontainer container 6eed3561946f48d53ab0d603f4d2e1ea89e796cb2e479f3391088a6e3d1c49c0.
Jul 2 00:21:31.283763 systemd[1]: Started cri-containerd-0e8b7ec6f8c390ace08e1cc64ad130dc849ad03f1055f5badd711c2265ea33f8.scope - libcontainer container 0e8b7ec6f8c390ace08e1cc64ad130dc849ad03f1055f5badd711c2265ea33f8.
Jul 2 00:21:31.297368 sshd[3878]: Received disconnect from 43.134.124.145 port 55126:11: Bye Bye [preauth]
Jul 2 00:21:31.297368 sshd[3878]: Disconnected from authenticating user root 43.134.124.145 port 55126 [preauth]
Jul 2 00:21:31.307056 systemd[1]: sshd@27-64.23.228.240:22-43.134.124.145:55126.service: Deactivated successfully.
Jul 2 00:21:31.444951 containerd[1470]: time="2024-07-02T00:21:31.444884602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bkjmz,Uid:128c3641-b15f-4d73-8b55-9241741d4fbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e8b7ec6f8c390ace08e1cc64ad130dc849ad03f1055f5badd711c2265ea33f8\""
Jul 2 00:21:31.447918 kubelet[2635]: E0702 00:21:31.447870 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:31.462554 containerd[1470]: time="2024-07-02T00:21:31.461455437Z" level=info msg="CreateContainer within sandbox \"0e8b7ec6f8c390ace08e1cc64ad130dc849ad03f1055f5badd711c2265ea33f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:21:31.525460 containerd[1470]: time="2024-07-02T00:21:31.518776033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fsw6x,Uid:e5439f5a-d36f-4f2c-8340-872248bf73c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eed3561946f48d53ab0d603f4d2e1ea89e796cb2e479f3391088a6e3d1c49c0\""
Jul 2 00:21:31.528189 kubelet[2635]: E0702 00:21:31.527780 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:31.554642 containerd[1470]: time="2024-07-02T00:21:31.554512773Z" level=info msg="CreateContainer within sandbox \"6eed3561946f48d53ab0d603f4d2e1ea89e796cb2e479f3391088a6e3d1c49c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:21:31.566857 containerd[1470]: time="2024-07-02T00:21:31.566470726Z" level=info msg="CreateContainer within sandbox \"0e8b7ec6f8c390ace08e1cc64ad130dc849ad03f1055f5badd711c2265ea33f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67c99b037ddcc4e785e799617601d1cea16122e7c827b41cfb90a0b544e6f595\""
Jul 2 00:21:31.570260 containerd[1470]: time="2024-07-02T00:21:31.568764506Z" level=info msg="StartContainer for \"67c99b037ddcc4e785e799617601d1cea16122e7c827b41cfb90a0b544e6f595\""
Jul 2 00:21:31.593331 containerd[1470]: time="2024-07-02T00:21:31.593256611Z" level=info msg="CreateContainer within sandbox \"6eed3561946f48d53ab0d603f4d2e1ea89e796cb2e479f3391088a6e3d1c49c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df26827760f9cc6bd2b5847b2cd17bf87b518ae2084ff56a8abc6b8d6cf96be6\""
Jul 2 00:21:31.603574 containerd[1470]: time="2024-07-02T00:21:31.596557788Z" level=info msg="StartContainer for \"df26827760f9cc6bd2b5847b2cd17bf87b518ae2084ff56a8abc6b8d6cf96be6\""
Jul 2 00:21:31.663032 systemd[1]: Started cri-containerd-67c99b037ddcc4e785e799617601d1cea16122e7c827b41cfb90a0b544e6f595.scope - libcontainer container 67c99b037ddcc4e785e799617601d1cea16122e7c827b41cfb90a0b544e6f595.
Jul 2 00:21:31.708512 systemd[1]: Started cri-containerd-df26827760f9cc6bd2b5847b2cd17bf87b518ae2084ff56a8abc6b8d6cf96be6.scope - libcontainer container df26827760f9cc6bd2b5847b2cd17bf87b518ae2084ff56a8abc6b8d6cf96be6.
Jul 2 00:21:31.816323 containerd[1470]: time="2024-07-02T00:21:31.816211837Z" level=info msg="StartContainer for \"67c99b037ddcc4e785e799617601d1cea16122e7c827b41cfb90a0b544e6f595\" returns successfully"
Jul 2 00:21:31.837262 containerd[1470]: time="2024-07-02T00:21:31.836279649Z" level=info msg="StartContainer for \"df26827760f9cc6bd2b5847b2cd17bf87b518ae2084ff56a8abc6b8d6cf96be6\" returns successfully"
Jul 2 00:21:32.151853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748028917.mount: Deactivated successfully.
Jul 2 00:21:32.361640 systemd[1]: Started sshd@28-64.23.228.240:22-190.181.4.12:51590.service - OpenSSH per-connection server daemon (190.181.4.12:51590).
Jul 2 00:21:32.792599 kubelet[2635]: E0702 00:21:32.787131 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:32.805859 kubelet[2635]: E0702 00:21:32.797731 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:32.890608 kubelet[2635]: I0702 00:21:32.889047 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bkjmz" podStartSLOduration=36.888982778 podStartE2EDuration="36.888982778s" podCreationTimestamp="2024-07-02 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:32.881902859 +0000 UTC m=+50.237711524" watchObservedRunningTime="2024-07-02 00:21:32.888982778 +0000 UTC m=+50.244791433"
Jul 2 00:21:32.890608 kubelet[2635]: I0702 00:21:32.889237 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fsw6x" podStartSLOduration=36.889205623 podStartE2EDuration="36.889205623s" podCreationTimestamp="2024-07-02 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:32.846736607 +0000 UTC m=+50.202545271" watchObservedRunningTime="2024-07-02 00:21:32.889205623 +0000 UTC m=+50.245014287"
Jul 2 00:21:33.048753 systemd[1]: Started sshd@29-64.23.228.240:22-43.156.68.109:52506.service - OpenSSH per-connection server daemon (43.156.68.109:52506).
Jul 2 00:21:33.631512 sshd[4037]: Received disconnect from 190.181.4.12 port 51590:11: Bye Bye [preauth]
Jul 2 00:21:33.631512 sshd[4037]: Disconnected from authenticating user root 190.181.4.12 port 51590 [preauth]
Jul 2 00:21:33.634463 systemd[1]: sshd@28-64.23.228.240:22-190.181.4.12:51590.service: Deactivated successfully.
Jul 2 00:21:33.809406 kubelet[2635]: E0702 00:21:33.801208 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:33.809406 kubelet[2635]: E0702 00:21:33.802108 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:34.658032 sshd[4049]: Received disconnect from 43.156.68.109 port 52506:11: Bye Bye [preauth]
Jul 2 00:21:34.658032 sshd[4049]: Disconnected from authenticating user root 43.156.68.109 port 52506 [preauth]
Jul 2 00:21:34.659334 systemd[1]: sshd@29-64.23.228.240:22-43.156.68.109:52506.service: Deactivated successfully.
Jul 2 00:21:34.804934 kubelet[2635]: E0702 00:21:34.804852 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:34.808261 kubelet[2635]: E0702 00:21:34.808189 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:35.336714 systemd[1]: Started sshd@30-64.23.228.240:22-147.75.109.163:58808.service - OpenSSH per-connection server daemon (147.75.109.163:58808).
Jul 2 00:21:35.424724 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 58808 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:35.429262 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:35.444985 systemd-logind[1445]: New session 8 of user core.
Jul 2 00:21:35.458757 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:21:36.738618 sshd[4062]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:36.753271 systemd[1]: sshd@30-64.23.228.240:22-147.75.109.163:58808.service: Deactivated successfully.
Jul 2 00:21:36.774996 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:21:36.777377 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:21:36.780338 systemd-logind[1445]: Removed session 8.
Jul 2 00:21:39.259090 systemd[1]: Started sshd@31-64.23.228.240:22-43.153.223.232:55626.service - OpenSSH per-connection server daemon (43.153.223.232:55626).
Jul 2 00:21:40.755537 sshd[4076]: Received disconnect from 43.153.223.232 port 55626:11: Bye Bye [preauth]
Jul 2 00:21:40.755537 sshd[4076]: Disconnected from authenticating user root 43.153.223.232 port 55626 [preauth]
Jul 2 00:21:40.759266 systemd[1]: sshd@31-64.23.228.240:22-43.153.223.232:55626.service: Deactivated successfully.
Jul 2 00:21:41.764989 systemd[1]: Started sshd@32-64.23.228.240:22-147.75.109.163:58816.service - OpenSSH per-connection server daemon (147.75.109.163:58816).
Jul 2 00:21:41.821891 sshd[4081]: Accepted publickey for core from 147.75.109.163 port 58816 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:41.824936 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:41.832507 systemd-logind[1445]: New session 9 of user core.
Jul 2 00:21:41.836743 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:21:42.015659 sshd[4081]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:42.020863 systemd[1]: sshd@32-64.23.228.240:22-147.75.109.163:58816.service: Deactivated successfully.
Jul 2 00:21:42.025911 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:21:42.029524 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:21:42.031826 systemd-logind[1445]: Removed session 9.
Jul 2 00:21:47.037720 systemd[1]: Started sshd@33-64.23.228.240:22-147.75.109.163:48742.service - OpenSSH per-connection server daemon (147.75.109.163:48742).
Jul 2 00:21:47.116466 sshd[4097]: Accepted publickey for core from 147.75.109.163 port 48742 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:47.119388 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:47.133546 systemd-logind[1445]: New session 10 of user core.
Jul 2 00:21:47.139768 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:21:47.319798 sshd[4097]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:47.326442 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:21:47.326824 systemd[1]: sshd@33-64.23.228.240:22-147.75.109.163:48742.service: Deactivated successfully.
Jul 2 00:21:47.329848 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:21:47.334078 systemd-logind[1445]: Removed session 10.
Jul 2 00:21:52.356506 systemd[1]: Started sshd@34-64.23.228.240:22-147.75.109.163:48744.service - OpenSSH per-connection server daemon (147.75.109.163:48744).
Jul 2 00:21:52.470050 sshd[4111]: Accepted publickey for core from 147.75.109.163 port 48744 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:52.475391 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:52.530035 systemd-logind[1445]: New session 11 of user core.
Jul 2 00:21:52.537847 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:21:52.764551 sshd[4111]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:52.787895 systemd[1]: sshd@34-64.23.228.240:22-147.75.109.163:48744.service: Deactivated successfully.
Jul 2 00:21:52.793662 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:21:52.800404 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:21:52.838172 systemd[1]: Started sshd@35-64.23.228.240:22-147.75.109.163:60198.service - OpenSSH per-connection server daemon (147.75.109.163:60198).
Jul 2 00:21:52.854748 systemd[1]: Started sshd@36-64.23.228.240:22-43.163.214.38:38462.service - OpenSSH per-connection server daemon (43.163.214.38:38462).
Jul 2 00:21:52.876684 systemd-logind[1445]: Removed session 11.
Jul 2 00:21:52.989276 sshd[4126]: Accepted publickey for core from 147.75.109.163 port 60198 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:52.997482 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:53.010137 systemd-logind[1445]: New session 12 of user core.
Jul 2 00:21:53.020918 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 00:21:53.515360 systemd[1]: Started sshd@37-64.23.228.240:22-43.134.0.65:36618.service - OpenSSH per-connection server daemon (43.134.0.65:36618).
Jul 2 00:21:53.585808 sshd[4126]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:53.604034 systemd[1]: sshd@35-64.23.228.240:22-147.75.109.163:60198.service: Deactivated successfully.
Jul 2 00:21:53.610328 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:21:53.614718 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:21:53.625980 systemd[1]: Started sshd@38-64.23.228.240:22-147.75.109.163:60210.service - OpenSSH per-connection server daemon (147.75.109.163:60210).
Jul 2 00:21:53.634933 systemd-logind[1445]: Removed session 12.
Jul 2 00:21:53.761466 sshd[4143]: Accepted publickey for core from 147.75.109.163 port 60210 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:53.761734 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:53.784432 systemd-logind[1445]: New session 13 of user core.
Jul 2 00:21:53.788075 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 00:21:53.922942 sshd[4127]: Received disconnect from 43.163.214.38 port 38462:11: Bye Bye [preauth]
Jul 2 00:21:53.927458 sshd[4127]: Disconnected from authenticating user root 43.163.214.38 port 38462 [preauth]
Jul 2 00:21:53.930876 systemd[1]: sshd@36-64.23.228.240:22-43.163.214.38:38462.service: Deactivated successfully.
Jul 2 00:21:54.139885 sshd[4143]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:54.147040 systemd[1]: sshd@38-64.23.228.240:22-147.75.109.163:60210.service: Deactivated successfully.
Jul 2 00:21:54.154801 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:21:54.159182 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:21:54.162650 systemd-logind[1445]: Removed session 13.
Jul 2 00:21:55.072613 sshd[4138]: Received disconnect from 43.134.0.65 port 36618:11: Bye Bye [preauth]
Jul 2 00:21:55.072613 sshd[4138]: Disconnected from authenticating user root 43.134.0.65 port 36618 [preauth]
Jul 2 00:21:55.078239 systemd[1]: sshd@37-64.23.228.240:22-43.134.0.65:36618.service: Deactivated successfully.
Jul 2 00:21:56.153019 systemd[1]: Started sshd@39-64.23.228.240:22-107.175.206.68:52110.service - OpenSSH per-connection server daemon (107.175.206.68:52110).
Jul 2 00:21:56.635467 sshd[4161]: Invalid user github from 107.175.206.68 port 52110
Jul 2 00:21:56.722547 sshd[4161]: Received disconnect from 107.175.206.68 port 52110:11: Bye Bye [preauth]
Jul 2 00:21:56.722547 sshd[4161]: Disconnected from invalid user github 107.175.206.68 port 52110 [preauth]
Jul 2 00:21:56.724547 systemd[1]: sshd@39-64.23.228.240:22-107.175.206.68:52110.service: Deactivated successfully.
Jul 2 00:21:57.152501 kubelet[2635]: E0702 00:21:57.151781 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:21:59.170951 systemd[1]: Started sshd@40-64.23.228.240:22-147.75.109.163:60218.service - OpenSSH per-connection server daemon (147.75.109.163:60218).
Jul 2 00:21:59.247386 sshd[4169]: Accepted publickey for core from 147.75.109.163 port 60218 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:21:59.250566 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:21:59.265912 systemd-logind[1445]: New session 14 of user core.
Jul 2 00:21:59.274442 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:21:59.453357 sshd[4169]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:59.458923 systemd[1]: sshd@40-64.23.228.240:22-147.75.109.163:60218.service: Deactivated successfully.
Jul 2 00:21:59.461784 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:21:59.464504 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:21:59.466823 systemd-logind[1445]: Removed session 14.
Jul 2 00:22:01.197245 systemd[1]: Started sshd@41-64.23.228.240:22-103.82.240.189:60832.service - OpenSSH per-connection server daemon (103.82.240.189:60832).
Jul 2 00:22:02.259836 sshd[4182]: Invalid user erpnext from 103.82.240.189 port 60832
Jul 2 00:22:02.462057 sshd[4182]: Received disconnect from 103.82.240.189 port 60832:11: Bye Bye [preauth]
Jul 2 00:22:02.462057 sshd[4182]: Disconnected from invalid user erpnext 103.82.240.189 port 60832 [preauth]
Jul 2 00:22:02.474730 systemd[1]: sshd@41-64.23.228.240:22-103.82.240.189:60832.service: Deactivated successfully.
Jul 2 00:22:04.475962 systemd[1]: Started sshd@42-64.23.228.240:22-147.75.109.163:51010.service - OpenSSH per-connection server daemon (147.75.109.163:51010).
Jul 2 00:22:04.575516 sshd[4187]: Accepted publickey for core from 147.75.109.163 port 51010 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:04.577509 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:04.585816 systemd-logind[1445]: New session 15 of user core.
Jul 2 00:22:04.593123 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:22:04.823650 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:04.829777 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:22:04.830497 systemd[1]: sshd@42-64.23.228.240:22-147.75.109.163:51010.service: Deactivated successfully.
Jul 2 00:22:04.833717 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:22:04.837460 systemd-logind[1445]: Removed session 15.
Jul 2 00:22:06.152762 kubelet[2635]: E0702 00:22:06.152608 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:09.849476 systemd[1]: Started sshd@43-64.23.228.240:22-147.75.109.163:51022.service - OpenSSH per-connection server daemon (147.75.109.163:51022).
Jul 2 00:22:09.895256 sshd[4200]: Accepted publickey for core from 147.75.109.163 port 51022 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:09.897539 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:09.903718 systemd-logind[1445]: New session 16 of user core.
Jul 2 00:22:09.914823 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:22:10.099801 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:10.115188 systemd[1]: sshd@43-64.23.228.240:22-147.75.109.163:51022.service: Deactivated successfully.
Jul 2 00:22:10.120186 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:22:10.128632 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:22:10.135105 systemd[1]: Started sshd@44-64.23.228.240:22-147.75.109.163:51030.service - OpenSSH per-connection server daemon (147.75.109.163:51030).
Jul 2 00:22:10.137165 systemd-logind[1445]: Removed session 16.
Jul 2 00:22:10.153296 kubelet[2635]: E0702 00:22:10.153229 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:10.196442 sshd[4213]: Accepted publickey for core from 147.75.109.163 port 51030 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:10.200687 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:10.210123 systemd-logind[1445]: New session 17 of user core.
Jul 2 00:22:10.221747 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:22:10.764665 sshd[4213]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:10.778303 systemd[1]: sshd@44-64.23.228.240:22-147.75.109.163:51030.service: Deactivated successfully.
Jul 2 00:22:10.782734 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:22:10.786774 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:22:10.796938 systemd[1]: Started sshd@45-64.23.228.240:22-147.75.109.163:51040.service - OpenSSH per-connection server daemon (147.75.109.163:51040).
Jul 2 00:22:10.799871 systemd-logind[1445]: Removed session 17.
Jul 2 00:22:10.869628 sshd[4224]: Accepted publickey for core from 147.75.109.163 port 51040 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:10.872592 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:10.880907 systemd-logind[1445]: New session 18 of user core.
Jul 2 00:22:10.892239 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:22:12.162944 systemd[1]: Started sshd@46-64.23.228.240:22-43.156.152.211:54690.service - OpenSSH per-connection server daemon (43.156.152.211:54690).
Jul 2 00:22:13.125597 sshd[4224]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:13.139585 systemd[1]: sshd@45-64.23.228.240:22-147.75.109.163:51040.service: Deactivated successfully.
Jul 2 00:22:13.142864 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:22:13.147219 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:22:13.157690 kubelet[2635]: E0702 00:22:13.157641 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:13.162221 systemd[1]: Started sshd@47-64.23.228.240:22-147.75.109.163:36538.service - OpenSSH per-connection server daemon (147.75.109.163:36538).
Jul 2 00:22:13.165583 systemd-logind[1445]: Removed session 18.
Jul 2 00:22:13.240589 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 36538 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:13.244284 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:13.254631 systemd-logind[1445]: New session 19 of user core.
Jul 2 00:22:13.260920 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:22:13.674168 sshd[4236]: Received disconnect from 43.156.152.211 port 54690:11: Bye Bye [preauth]
Jul 2 00:22:13.674168 sshd[4236]: Disconnected from authenticating user root 43.156.152.211 port 54690 [preauth]
Jul 2 00:22:13.680223 systemd[1]: sshd@46-64.23.228.240:22-43.156.152.211:54690.service: Deactivated successfully.
Jul 2 00:22:13.851387 sshd[4246]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:13.883109 systemd[1]: sshd@47-64.23.228.240:22-147.75.109.163:36538.service: Deactivated successfully.
Jul 2 00:22:13.889231 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:22:13.894889 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:22:13.908359 systemd[1]: Started sshd@48-64.23.228.240:22-147.75.109.163:36544.service - OpenSSH per-connection server daemon (147.75.109.163:36544).
Jul 2 00:22:13.912264 systemd-logind[1445]: Removed session 19.
Jul 2 00:22:13.967808 sshd[4260]: Accepted publickey for core from 147.75.109.163 port 36544 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:13.970174 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:13.990803 systemd-logind[1445]: New session 20 of user core.
Jul 2 00:22:13.993803 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:22:14.281633 sshd[4260]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:14.291978 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:22:14.294714 systemd[1]: sshd@48-64.23.228.240:22-147.75.109.163:36544.service: Deactivated successfully.
Jul 2 00:22:14.299321 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:22:14.301976 systemd-logind[1445]: Removed session 20.
Jul 2 00:22:19.312973 systemd[1]: Started sshd@49-64.23.228.240:22-147.75.109.163:36546.service - OpenSSH per-connection server daemon (147.75.109.163:36546).
Jul 2 00:22:19.388505 sshd[4272]: Accepted publickey for core from 147.75.109.163 port 36546 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:19.391018 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:19.399154 systemd-logind[1445]: New session 21 of user core.
Jul 2 00:22:19.408108 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:22:19.665222 sshd[4272]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:19.677210 systemd[1]: sshd@49-64.23.228.240:22-147.75.109.163:36546.service: Deactivated successfully.
Jul 2 00:22:19.683490 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:22:19.685061 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:22:19.699701 systemd-logind[1445]: Removed session 21.
Jul 2 00:22:19.908152 update_engine[1446]: I0702 00:22:19.907485 1446 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 2 00:22:19.908152 update_engine[1446]: I0702 00:22:19.907603 1446 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 2 00:22:19.915964 update_engine[1446]: I0702 00:22:19.915756 1446 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 2 00:22:19.943057 update_engine[1446]: I0702 00:22:19.917152 1446 omaha_request_params.cc:62] Current group set to beta
Jul 2 00:22:19.943057 update_engine[1446]: I0702 00:22:19.930691 1446 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.945770 1446 update_attempter.cc:643] Scheduling an action processor start.
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.945835 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.945959 1446 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.946100 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.946110 1446 omaha_request_action.cc:272] Request:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]:
Jul 2 00:22:19.954406 update_engine[1446]: I0702 00:22:19.946118 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:22:19.974171 update_engine[1446]: I0702 00:22:19.969399 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:22:19.974171 update_engine[1446]: I0702 00:22:19.970046 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:22:19.974171 update_engine[1446]: E0702 00:22:19.972726 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:22:19.974171 update_engine[1446]: I0702 00:22:19.972837 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 2 00:22:19.980142 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 2 00:22:21.152951 kubelet[2635]: E0702 00:22:21.152052 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:22.112088 systemd[1]: Started sshd@50-64.23.228.240:22-112.6.122.181:44406.service - OpenSSH per-connection server daemon (112.6.122.181:44406).
Jul 2 00:22:22.138947 systemd[1]: Started sshd@51-64.23.228.240:22-43.156.68.109:39474.service - OpenSSH per-connection server daemon (43.156.68.109:39474).
Jul 2 00:22:23.224462 sshd[4288]: Invalid user deploy from 112.6.122.181 port 44406
Jul 2 00:22:23.437501 sshd[4288]: Received disconnect from 112.6.122.181 port 44406:11: Bye Bye [preauth]
Jul 2 00:22:23.437501 sshd[4288]: Disconnected from invalid user deploy 112.6.122.181 port 44406 [preauth]
Jul 2 00:22:23.440909 systemd[1]: sshd@50-64.23.228.240:22-112.6.122.181:44406.service: Deactivated successfully.
Jul 2 00:22:23.715559 sshd[4291]: Received disconnect from 43.156.68.109 port 39474:11: Bye Bye [preauth]
Jul 2 00:22:23.715559 sshd[4291]: Disconnected from authenticating user root 43.156.68.109 port 39474 [preauth]
Jul 2 00:22:23.719464 systemd[1]: sshd@51-64.23.228.240:22-43.156.68.109:39474.service: Deactivated successfully.
Jul 2 00:22:24.688842 systemd[1]: Started sshd@52-64.23.228.240:22-147.75.109.163:33642.service - OpenSSH per-connection server daemon (147.75.109.163:33642).
Jul 2 00:22:24.737717 sshd[4298]: Accepted publickey for core from 147.75.109.163 port 33642 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:24.741217 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:24.748924 systemd-logind[1445]: New session 22 of user core.
Jul 2 00:22:24.755017 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:22:24.922016 sshd[4298]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:24.927151 systemd[1]: sshd@52-64.23.228.240:22-147.75.109.163:33642.service: Deactivated successfully.
Jul 2 00:22:24.931700 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:22:24.935655 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:22:24.937666 systemd-logind[1445]: Removed session 22.
Jul 2 00:22:25.919020 systemd[1]: Started sshd@53-64.23.228.240:22-43.134.124.145:42184.service - OpenSSH per-connection server daemon (43.134.124.145:42184).
Jul 2 00:22:27.442550 sshd[4311]: Received disconnect from 43.134.124.145 port 42184:11: Bye Bye [preauth]
Jul 2 00:22:27.442550 sshd[4311]: Disconnected from authenticating user root 43.134.124.145 port 42184 [preauth]
Jul 2 00:22:27.445433 systemd[1]: sshd@53-64.23.228.240:22-43.134.124.145:42184.service: Deactivated successfully.
Jul 2 00:22:29.825135 update_engine[1446]: I0702 00:22:29.825052 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 00:22:29.826349 update_engine[1446]: I0702 00:22:29.825325 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 00:22:29.826349 update_engine[1446]: I0702 00:22:29.825909 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 00:22:29.826446 update_engine[1446]: E0702 00:22:29.826360 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 00:22:29.826446 update_engine[1446]: I0702 00:22:29.826442 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 2 00:22:29.940168 systemd[1]: Started sshd@54-64.23.228.240:22-147.75.109.163:33658.service - OpenSSH per-connection server daemon (147.75.109.163:33658).
Jul 2 00:22:29.986467 sshd[4318]: Accepted publickey for core from 147.75.109.163 port 33658 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k
Jul 2 00:22:29.988561 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:29.995497 systemd-logind[1445]: New session 23 of user core.
Jul 2 00:22:30.001978 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:22:30.169540 sshd[4318]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:30.180321 systemd[1]: sshd@54-64.23.228.240:22-147.75.109.163:33658.service: Deactivated successfully.
Jul 2 00:22:30.184651 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:22:30.186546 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:22:30.188589 systemd-logind[1445]: Removed session 23.
Jul 2 00:22:31.771930 systemd[1]: Started sshd@55-64.23.228.240:22-43.153.223.232:59962.service - OpenSSH per-connection server daemon (43.153.223.232:59962).
Jul 2 00:22:33.275372 sshd[4332]: Received disconnect from 43.153.223.232 port 59962:11: Bye Bye [preauth]
Jul 2 00:22:33.275372 sshd[4332]: Disconnected from authenticating user root 43.153.223.232 port 59962 [preauth]
Jul 2 00:22:33.279314 systemd[1]: sshd@55-64.23.228.240:22-43.153.223.232:59962.service: Deactivated successfully.
Jul 2 00:22:35.189070 systemd[1]: Started sshd@56-64.23.228.240:22-147.75.109.163:55236.service - OpenSSH per-connection server daemon (147.75.109.163:55236).
Jul 2 00:22:35.259474 sshd[4337]: Accepted publickey for core from 147.75.109.163 port 55236 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:35.262402 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:35.271174 systemd-logind[1445]: New session 24 of user core. Jul 2 00:22:35.276753 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:22:35.447278 sshd[4337]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:35.453552 systemd[1]: sshd@56-64.23.228.240:22-147.75.109.163:55236.service: Deactivated successfully. Jul 2 00:22:35.457280 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:22:35.459794 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:22:35.462039 systemd-logind[1445]: Removed session 24. Jul 2 00:22:39.477074 systemd[1]: Started sshd@57-64.23.228.240:22-43.163.214.38:53368.service - OpenSSH per-connection server daemon (43.163.214.38:53368). Jul 2 00:22:39.839815 update_engine[1446]: I0702 00:22:39.832937 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:22:39.839815 update_engine[1446]: I0702 00:22:39.833302 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:22:39.839815 update_engine[1446]: I0702 00:22:39.838356 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:22:39.839815 update_engine[1446]: E0702 00:22:39.838884 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:22:39.839815 update_engine[1446]: I0702 00:22:39.838968 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 00:22:40.339338 sshd[4350]: Invalid user ubuntu from 43.163.214.38 port 53368 Jul 2 00:22:40.475124 systemd[1]: Started sshd@58-64.23.228.240:22-147.75.109.163:55248.service - OpenSSH per-connection server daemon (147.75.109.163:55248). 
Jul 2 00:22:40.526603 sshd[4350]: Received disconnect from 43.163.214.38 port 53368:11: Bye Bye [preauth] Jul 2 00:22:40.526853 sshd[4350]: Disconnected from invalid user ubuntu 43.163.214.38 port 53368 [preauth] Jul 2 00:22:40.530618 systemd[1]: sshd@57-64.23.228.240:22-43.163.214.38:53368.service: Deactivated successfully. Jul 2 00:22:40.601725 sshd[4353]: Accepted publickey for core from 147.75.109.163 port 55248 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:40.603968 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:40.630179 systemd-logind[1445]: New session 25 of user core. Jul 2 00:22:40.648258 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:22:40.995657 sshd[4353]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:41.013277 systemd[1]: sshd@58-64.23.228.240:22-147.75.109.163:55248.service: Deactivated successfully. Jul 2 00:22:41.019362 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:22:41.021217 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:22:41.026845 systemd-logind[1445]: Removed session 25. Jul 2 00:22:46.018196 systemd[1]: Started sshd@59-64.23.228.240:22-147.75.109.163:33854.service - OpenSSH per-connection server daemon (147.75.109.163:33854). Jul 2 00:22:46.075587 sshd[4370]: Accepted publickey for core from 147.75.109.163 port 33854 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:46.078049 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:46.087700 systemd-logind[1445]: New session 26 of user core. Jul 2 00:22:46.092847 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:22:46.196303 systemd[1]: Started sshd@60-64.23.228.240:22-43.134.0.65:51934.service - OpenSSH per-connection server daemon (43.134.0.65:51934). 
Jul 2 00:22:46.293329 sshd[4370]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:46.306074 systemd[1]: sshd@59-64.23.228.240:22-147.75.109.163:33854.service: Deactivated successfully. Jul 2 00:22:46.310674 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:22:46.314383 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:22:46.322120 systemd[1]: Started sshd@61-64.23.228.240:22-147.75.109.163:33862.service - OpenSSH per-connection server daemon (147.75.109.163:33862). Jul 2 00:22:46.323870 systemd-logind[1445]: Removed session 26. Jul 2 00:22:46.422943 sshd[4386]: Accepted publickey for core from 147.75.109.163 port 33862 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:46.423812 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:46.429802 systemd-logind[1445]: New session 27 of user core. Jul 2 00:22:46.435909 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:22:47.433160 sshd[4380]: Invalid user ftpadmin from 43.134.0.65 port 51934 Jul 2 00:22:47.523006 systemd[1]: Started sshd@62-64.23.228.240:22-103.82.240.189:46704.service - OpenSSH per-connection server daemon (103.82.240.189:46704). Jul 2 00:22:47.667729 sshd[4380]: Received disconnect from 43.134.0.65 port 51934:11: Bye Bye [preauth] Jul 2 00:22:47.667729 sshd[4380]: Disconnected from invalid user ftpadmin 43.134.0.65 port 51934 [preauth] Jul 2 00:22:47.671225 systemd[1]: sshd@60-64.23.228.240:22-43.134.0.65:51934.service: Deactivated successfully. 
Jul 2 00:22:48.026180 containerd[1470]: time="2024-07-02T00:22:48.026087506Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:22:48.079699 containerd[1470]: time="2024-07-02T00:22:48.079287896Z" level=info msg="StopContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" with timeout 30 (s)" Jul 2 00:22:48.080326 containerd[1470]: time="2024-07-02T00:22:48.080143123Z" level=info msg="StopContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" with timeout 2 (s)" Jul 2 00:22:48.081022 containerd[1470]: time="2024-07-02T00:22:48.080902549Z" level=info msg="Stop container \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" with signal terminated" Jul 2 00:22:48.081548 containerd[1470]: time="2024-07-02T00:22:48.081142140Z" level=info msg="Stop container \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" with signal terminated" Jul 2 00:22:48.096030 systemd-networkd[1368]: lxc_health: Link DOWN Jul 2 00:22:48.096042 systemd-networkd[1368]: lxc_health: Lost carrier Jul 2 00:22:48.116782 systemd[1]: cri-containerd-4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80.scope: Deactivated successfully. Jul 2 00:22:48.136983 systemd[1]: cri-containerd-6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5.scope: Deactivated successfully. Jul 2 00:22:48.138161 systemd[1]: cri-containerd-6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5.scope: Consumed 11.595s CPU time. Jul 2 00:22:48.184625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5-rootfs.mount: Deactivated successfully. 
Jul 2 00:22:48.190158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80-rootfs.mount: Deactivated successfully. Jul 2 00:22:48.207472 containerd[1470]: time="2024-07-02T00:22:48.207094970Z" level=info msg="shim disconnected" id=6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5 namespace=k8s.io Jul 2 00:22:48.208360 containerd[1470]: time="2024-07-02T00:22:48.207699416Z" level=info msg="shim disconnected" id=4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80 namespace=k8s.io Jul 2 00:22:48.208360 containerd[1470]: time="2024-07-02T00:22:48.207761499Z" level=warning msg="cleaning up after shim disconnected" id=4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80 namespace=k8s.io Jul 2 00:22:48.208360 containerd[1470]: time="2024-07-02T00:22:48.207777824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:48.208360 containerd[1470]: time="2024-07-02T00:22:48.208061715Z" level=warning msg="cleaning up after shim disconnected" id=6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5 namespace=k8s.io Jul 2 00:22:48.208360 containerd[1470]: time="2024-07-02T00:22:48.208243262Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:48.255628 containerd[1470]: time="2024-07-02T00:22:48.255556469Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:22:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:22:48.272793 containerd[1470]: time="2024-07-02T00:22:48.272496318Z" level=info msg="StopContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" returns successfully" Jul 2 00:22:48.273441 containerd[1470]: time="2024-07-02T00:22:48.273014088Z" level=info msg="StopContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" returns 
successfully" Jul 2 00:22:48.273982 containerd[1470]: time="2024-07-02T00:22:48.273891091Z" level=info msg="StopPodSandbox for \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.274250324Z" level=info msg="StopPodSandbox for \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.278507284Z" level=info msg="Container to stop \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.278590907Z" level=info msg="Container to stop \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.278607575Z" level=info msg="Container to stop \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.278624562Z" level=info msg="Container to stop \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.278641465Z" level=info msg="Container to stop \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.282005 containerd[1470]: time="2024-07-02T00:22:48.274295672Z" level=info msg="Container to stop \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:22:48.285085 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274-shm.mount: Deactivated successfully. Jul 2 00:22:48.293280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386-shm.mount: Deactivated successfully. Jul 2 00:22:48.302753 systemd[1]: cri-containerd-5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274.scope: Deactivated successfully. Jul 2 00:22:48.315101 systemd[1]: cri-containerd-71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386.scope: Deactivated successfully. Jul 2 00:22:48.358235 containerd[1470]: time="2024-07-02T00:22:48.358150620Z" level=info msg="shim disconnected" id=5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274 namespace=k8s.io Jul 2 00:22:48.358681 containerd[1470]: time="2024-07-02T00:22:48.358602521Z" level=warning msg="cleaning up after shim disconnected" id=5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274 namespace=k8s.io Jul 2 00:22:48.358681 containerd[1470]: time="2024-07-02T00:22:48.358642856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:48.364682 containerd[1470]: time="2024-07-02T00:22:48.364566411Z" level=info msg="shim disconnected" id=71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386 namespace=k8s.io Jul 2 00:22:48.364682 containerd[1470]: time="2024-07-02T00:22:48.364665492Z" level=warning msg="cleaning up after shim disconnected" id=71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386 namespace=k8s.io Jul 2 00:22:48.364682 containerd[1470]: time="2024-07-02T00:22:48.364679662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:48.407026 containerd[1470]: time="2024-07-02T00:22:48.406011766Z" level=info msg="TearDown network for sandbox \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" successfully" Jul 2 00:22:48.407026 containerd[1470]: 
time="2024-07-02T00:22:48.406073325Z" level=info msg="StopPodSandbox for \"5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274\" returns successfully" Jul 2 00:22:48.414038 containerd[1470]: time="2024-07-02T00:22:48.412876149Z" level=info msg="TearDown network for sandbox \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\" successfully" Jul 2 00:22:48.414038 containerd[1470]: time="2024-07-02T00:22:48.412922616Z" level=info msg="StopPodSandbox for \"71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386\" returns successfully" Jul 2 00:22:48.514867 sshd[4395]: Invalid user user2 from 103.82.240.189 port 46704 Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548505 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpldd\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-kube-api-access-fpldd\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548601 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-bpf-maps\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548633 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-xtables-lock\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548660 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cni-path\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: 
\"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548688 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-run\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.549817 kubelet[2635]: I0702 00:22:48.548730 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5662d66-0c07-4ace-a464-ea82897a6149-clustermesh-secrets\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548766 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-cilium-config-path\") pod \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\" (UID: \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548803 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-lib-modules\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548837 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-config-path\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548868 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-kernel\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548900 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-net\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.550740 kubelet[2635]: I0702 00:22:48.548933 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-cgroup\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.551028 kubelet[2635]: I0702 00:22:48.548963 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-hostproc\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.551028 kubelet[2635]: I0702 00:22:48.549008 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5bmk\" (UniqueName: \"kubernetes.io/projected/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-kube-api-access-l5bmk\") pod \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\" (UID: \"e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea\") " Jul 2 00:22:48.551028 kubelet[2635]: I0702 00:22:48.549047 2635 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-hubble-tls\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.551028 kubelet[2635]: I0702 00:22:48.549077 2635 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-etc-cni-netd\") pod \"c5662d66-0c07-4ace-a464-ea82897a6149\" (UID: \"c5662d66-0c07-4ace-a464-ea82897a6149\") " Jul 2 00:22:48.553590 kubelet[2635]: I0702 00:22:48.549215 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.553590 kubelet[2635]: I0702 00:22:48.552685 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.553590 kubelet[2635]: I0702 00:22:48.552720 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.553590 kubelet[2635]: I0702 00:22:48.552746 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cni-path" (OuterVolumeSpecName: "cni-path") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.553590 kubelet[2635]: I0702 00:22:48.552773 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.561589 kubelet[2635]: I0702 00:22:48.559554 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5662d66-0c07-4ace-a464-ea82897a6149-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:22:48.563555 kubelet[2635]: I0702 00:22:48.563362 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" (UID: "e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:22:48.563824 kubelet[2635]: I0702 00:22:48.563594 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.563929 kubelet[2635]: I0702 00:22:48.563664 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.564063 kubelet[2635]: I0702 00:22:48.564040 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.564176 kubelet[2635]: I0702 00:22:48.564159 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-hostproc" (OuterVolumeSpecName: "hostproc") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.566142 kubelet[2635]: I0702 00:22:48.566094 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:22:48.567867 kubelet[2635]: I0702 00:22:48.567805 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-kube-api-access-fpldd" (OuterVolumeSpecName: "kube-api-access-fpldd") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "kube-api-access-fpldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:22:48.569857 kubelet[2635]: I0702 00:22:48.569779 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:22:48.572178 kubelet[2635]: I0702 00:22:48.572093 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c5662d66-0c07-4ace-a464-ea82897a6149" (UID: "c5662d66-0c07-4ace-a464-ea82897a6149"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:22:48.572520 kubelet[2635]: I0702 00:22:48.572189 2635 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-kube-api-access-l5bmk" (OuterVolumeSpecName: "kube-api-access-l5bmk") pod "e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" (UID: "e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea"). InnerVolumeSpecName "kube-api-access-l5bmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:22:48.612872 kubelet[2635]: E0702 00:22:48.612796 2635 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:22:48.650606 kubelet[2635]: I0702 00:22:48.650543 2635 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-etc-cni-netd\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.650868 kubelet[2635]: I0702 00:22:48.650846 2635 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fpldd\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-kube-api-access-fpldd\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651004 kubelet[2635]: I0702 00:22:48.650984 2635 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-bpf-maps\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651077 kubelet[2635]: I0702 00:22:48.651068 2635 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-xtables-lock\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651135 kubelet[2635]: I0702 00:22:48.651128 2635 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cni-path\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651187 kubelet[2635]: I0702 00:22:48.651180 2635 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-run\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 
00:22:48.651243 kubelet[2635]: I0702 00:22:48.651234 2635 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5662d66-0c07-4ace-a464-ea82897a6149-clustermesh-secrets\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651310 kubelet[2635]: I0702 00:22:48.651300 2635 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-cilium-config-path\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651379 kubelet[2635]: I0702 00:22:48.651371 2635 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-lib-modules\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651493 kubelet[2635]: I0702 00:22:48.651479 2635 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-cgroup\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651606 kubelet[2635]: I0702 00:22:48.651591 2635 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5662d66-0c07-4ace-a464-ea82897a6149-cilium-config-path\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651708 kubelet[2635]: I0702 00:22:48.651692 2635 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-kernel\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651884 kubelet[2635]: I0702 00:22:48.651805 2635 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-host-proc-sys-net\") on node 
\"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651884 kubelet[2635]: I0702 00:22:48.651827 2635 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5662d66-0c07-4ace-a464-ea82897a6149-hubble-tls\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651884 kubelet[2635]: I0702 00:22:48.651842 2635 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5662d66-0c07-4ace-a464-ea82897a6149-hostproc\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.651884 kubelet[2635]: I0702 00:22:48.651860 2635 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l5bmk\" (UniqueName: \"kubernetes.io/projected/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea-kube-api-access-l5bmk\") on node \"ci-3975.1.1-9-82cbb2c548\" DevicePath \"\"" Jul 2 00:22:48.701058 sshd[4395]: Received disconnect from 103.82.240.189 port 46704:11: Bye Bye [preauth] Jul 2 00:22:48.701058 sshd[4395]: Disconnected from invalid user user2 103.82.240.189 port 46704 [preauth] Jul 2 00:22:48.703734 systemd[1]: sshd@62-64.23.228.240:22-103.82.240.189:46704.service: Deactivated successfully. Jul 2 00:22:48.969727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71ae3b5e66c86ae20c104fa9a3b343bbba20afecaa3d3038ffda968d770cc386-rootfs.mount: Deactivated successfully. Jul 2 00:22:48.969904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cca516054fa1829ea3884e88835de4d61b005d63ac13e8a80bc896ae8c18274-rootfs.mount: Deactivated successfully. Jul 2 00:22:48.969971 systemd[1]: var-lib-kubelet-pods-e9881cce\x2d6c64\x2d4e1d\x2d85e6\x2db0cbdad5e8ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5bmk.mount: Deactivated successfully. 
Jul 2 00:22:48.970045 systemd[1]: var-lib-kubelet-pods-c5662d66\x2d0c07\x2d4ace\x2da464\x2dea82897a6149-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpldd.mount: Deactivated successfully. Jul 2 00:22:48.970113 systemd[1]: var-lib-kubelet-pods-c5662d66\x2d0c07\x2d4ace\x2da464\x2dea82897a6149-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:22:48.970175 systemd[1]: var-lib-kubelet-pods-c5662d66\x2d0c07\x2d4ace\x2da464\x2dea82897a6149-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:22:49.159188 systemd[1]: Removed slice kubepods-burstable-podc5662d66_0c07_4ace_a464_ea82897a6149.slice - libcontainer container kubepods-burstable-podc5662d66_0c07_4ace_a464_ea82897a6149.slice. Jul 2 00:22:49.159350 systemd[1]: kubepods-burstable-podc5662d66_0c07_4ace_a464_ea82897a6149.slice: Consumed 11.735s CPU time. Jul 2 00:22:49.178469 kubelet[2635]: E0702 00:22:49.178400 2635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fsw6x" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4" Jul 2 00:22:49.186503 kubelet[2635]: I0702 00:22:49.184836 2635 scope.go:117] "RemoveContainer" containerID="6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5" Jul 2 00:22:49.190443 containerd[1470]: time="2024-07-02T00:22:49.188549190Z" level=info msg="RemoveContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\"" Jul 2 00:22:49.206096 systemd[1]: Removed slice kubepods-besteffort-pode9881cce_6c64_4e1d_85e6_b0cbdad5e8ea.slice - libcontainer container kubepods-besteffort-pode9881cce_6c64_4e1d_85e6_b0cbdad5e8ea.slice. 
Jul 2 00:22:49.222617 containerd[1470]: time="2024-07-02T00:22:49.221741867Z" level=info msg="RemoveContainer for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" returns successfully" Jul 2 00:22:49.223327 kubelet[2635]: I0702 00:22:49.222290 2635 scope.go:117] "RemoveContainer" containerID="a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff" Jul 2 00:22:49.225193 containerd[1470]: time="2024-07-02T00:22:49.225129566Z" level=info msg="RemoveContainer for \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\"" Jul 2 00:22:49.234748 containerd[1470]: time="2024-07-02T00:22:49.233770383Z" level=info msg="RemoveContainer for \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\" returns successfully" Jul 2 00:22:49.234927 kubelet[2635]: I0702 00:22:49.234123 2635 scope.go:117] "RemoveContainer" containerID="3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae" Jul 2 00:22:49.236723 containerd[1470]: time="2024-07-02T00:22:49.236511215Z" level=info msg="RemoveContainer for \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\"" Jul 2 00:22:49.241589 containerd[1470]: time="2024-07-02T00:22:49.240837141Z" level=info msg="RemoveContainer for \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\" returns successfully" Jul 2 00:22:49.242154 kubelet[2635]: I0702 00:22:49.242114 2635 scope.go:117] "RemoveContainer" containerID="c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe" Jul 2 00:22:49.244482 containerd[1470]: time="2024-07-02T00:22:49.244430804Z" level=info msg="RemoveContainer for \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\"" Jul 2 00:22:49.262003 containerd[1470]: time="2024-07-02T00:22:49.261905224Z" level=info msg="RemoveContainer for \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\" returns successfully" Jul 2 00:22:49.264607 kubelet[2635]: I0702 00:22:49.262576 2635 scope.go:117] "RemoveContainer" 
containerID="24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87" Jul 2 00:22:49.265810 containerd[1470]: time="2024-07-02T00:22:49.265741078Z" level=info msg="RemoveContainer for \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\"" Jul 2 00:22:49.274270 containerd[1470]: time="2024-07-02T00:22:49.273869881Z" level=info msg="RemoveContainer for \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\" returns successfully" Jul 2 00:22:49.274595 kubelet[2635]: I0702 00:22:49.274238 2635 scope.go:117] "RemoveContainer" containerID="6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5" Jul 2 00:22:49.275073 containerd[1470]: time="2024-07-02T00:22:49.274924261Z" level=error msg="ContainerStatus for \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\": not found" Jul 2 00:22:49.279901 kubelet[2635]: E0702 00:22:49.279821 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\": not found" containerID="6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5" Jul 2 00:22:49.292507 kubelet[2635]: I0702 00:22:49.292361 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5"} err="failed to get container status \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e504fdfa9c63ae8e0dbfdef633bcafb6d0b47c82f2c0819ff6b83e4daecc5f5\": not found" Jul 2 00:22:49.292507 kubelet[2635]: I0702 00:22:49.292462 2635 scope.go:117] "RemoveContainer" 
containerID="a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff" Jul 2 00:22:49.293267 containerd[1470]: time="2024-07-02T00:22:49.292946368Z" level=error msg="ContainerStatus for \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\": not found" Jul 2 00:22:49.294313 kubelet[2635]: E0702 00:22:49.293801 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\": not found" containerID="a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff" Jul 2 00:22:49.294313 kubelet[2635]: I0702 00:22:49.293864 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff"} err="failed to get container status \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"a38c243396a925fabd7a686b8e1b97f99e77df7731d98d8c058d4e826fc0d0ff\": not found" Jul 2 00:22:49.294313 kubelet[2635]: I0702 00:22:49.293881 2635 scope.go:117] "RemoveContainer" containerID="3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae" Jul 2 00:22:49.294515 containerd[1470]: time="2024-07-02T00:22:49.294227444Z" level=error msg="ContainerStatus for \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\": not found" Jul 2 00:22:49.294566 kubelet[2635]: E0702 00:22:49.294478 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\": not found" containerID="3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae" Jul 2 00:22:49.294566 kubelet[2635]: I0702 00:22:49.294528 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae"} err="failed to get container status \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b07ebef3ce4346451c8f3cfbfb0e703d0d56854b7df2a86faa0f75bc58e25ae\": not found" Jul 2 00:22:49.294566 kubelet[2635]: I0702 00:22:49.294546 2635 scope.go:117] "RemoveContainer" containerID="c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe" Jul 2 00:22:49.295143 containerd[1470]: time="2024-07-02T00:22:49.294900580Z" level=error msg="ContainerStatus for \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\": not found" Jul 2 00:22:49.295273 kubelet[2635]: E0702 00:22:49.295065 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\": not found" containerID="c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe" Jul 2 00:22:49.295273 kubelet[2635]: I0702 00:22:49.295099 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe"} err="failed to get container status \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"c4a268278b94b71ea444d8dffb8b44c85c645762c14d01a6cc314309336791fe\": not found" Jul 2 00:22:49.295273 kubelet[2635]: I0702 00:22:49.295115 2635 scope.go:117] "RemoveContainer" containerID="24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87" Jul 2 00:22:49.295768 containerd[1470]: time="2024-07-02T00:22:49.295693024Z" level=error msg="ContainerStatus for \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\": not found" Jul 2 00:22:49.295909 kubelet[2635]: E0702 00:22:49.295855 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\": not found" containerID="24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87" Jul 2 00:22:49.295909 kubelet[2635]: I0702 00:22:49.295892 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87"} err="failed to get container status \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\": rpc error: code = NotFound desc = an error occurred when try to find container \"24a0f57504f2288ded465b334cd3805d6ac25d58aa9a2b7f570cf40bdbf16a87\": not found" Jul 2 00:22:49.295909 kubelet[2635]: I0702 00:22:49.295911 2635 scope.go:117] "RemoveContainer" containerID="4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80" Jul 2 00:22:49.297791 containerd[1470]: time="2024-07-02T00:22:49.297617949Z" level=info msg="RemoveContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\"" Jul 2 00:22:49.312061 containerd[1470]: time="2024-07-02T00:22:49.311956580Z" level=info 
msg="RemoveContainer for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" returns successfully" Jul 2 00:22:49.312768 kubelet[2635]: I0702 00:22:49.312500 2635 scope.go:117] "RemoveContainer" containerID="4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80" Jul 2 00:22:49.312931 containerd[1470]: time="2024-07-02T00:22:49.312882660Z" level=error msg="ContainerStatus for \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\": not found" Jul 2 00:22:49.313129 kubelet[2635]: E0702 00:22:49.313102 2635 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\": not found" containerID="4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80" Jul 2 00:22:49.313265 kubelet[2635]: I0702 00:22:49.313166 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80"} err="failed to get container status \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\": rpc error: code = NotFound desc = an error occurred when try to find container \"4804b2d0e80cdc31e542da81818807e30936de151ef5a07353a8a6cb2031ce80\": not found" Jul 2 00:22:49.825137 update_engine[1446]: I0702 00:22:49.825049 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:22:49.826344 update_engine[1446]: I0702 00:22:49.825600 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:22:49.826344 update_engine[1446]: I0702 00:22:49.825898 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:22:49.828041 update_engine[1446]: E0702 00:22:49.827982 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:22:49.828237 update_engine[1446]: I0702 00:22:49.828073 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:22:49.828237 update_engine[1446]: I0702 00:22:49.828083 1446 omaha_request_action.cc:617] Omaha request response: Jul 2 00:22:49.828237 update_engine[1446]: E0702 00:22:49.828202 1446 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 00:22:49.828237 update_engine[1446]: I0702 00:22:49.828236 1446 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 00:22:49.828237 update_engine[1446]: I0702 00:22:49.828241 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828245 1446 update_attempter.cc:306] Processing Done. Jul 2 00:22:49.828476 update_engine[1446]: E0702 00:22:49.828260 1446 update_attempter.cc:619] Update failed. Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828265 1446 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828269 1446 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828274 1446 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828347 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828367 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828370 1446 omaha_request_action.cc:272] Request: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: Jul 2 00:22:49.828476 update_engine[1446]: I0702 00:22:49.828375 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:22:49.829051 update_engine[1446]: I0702 00:22:49.828555 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:22:49.829051 update_engine[1446]: I0702 00:22:49.828802 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:22:49.831107 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 00:22:49.831708 update_engine[1446]: E0702 00:22:49.831171 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831248 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831256 1446 omaha_request_action.cc:617] Omaha request response: Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831265 1446 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831271 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831277 1446 update_attempter.cc:306] Processing Done. Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831284 1446 update_attempter.cc:310] Error event sent. Jul 2 00:22:49.831708 update_engine[1446]: I0702 00:22:49.831296 1446 update_check_scheduler.cc:74] Next update check in 47m31s Jul 2 00:22:49.832584 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 00:22:49.842838 sshd[4386]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:49.855053 systemd[1]: sshd@61-64.23.228.240:22-147.75.109.163:33862.service: Deactivated successfully. Jul 2 00:22:49.858824 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:22:49.862150 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:22:49.872003 systemd[1]: Started sshd@63-64.23.228.240:22-147.75.109.163:33874.service - OpenSSH per-connection server daemon (147.75.109.163:33874). 
Jul 2 00:22:49.873195 systemd-logind[1445]: Removed session 27. Jul 2 00:22:49.926246 sshd[4549]: Accepted publickey for core from 147.75.109.163 port 33874 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:49.929723 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:49.937855 systemd-logind[1445]: New session 28 of user core. Jul 2 00:22:49.944765 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:22:50.854195 sshd[4549]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:50.871107 systemd[1]: sshd@63-64.23.228.240:22-147.75.109.163:33874.service: Deactivated successfully. Jul 2 00:22:50.878217 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:22:50.882946 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:22:50.897061 systemd[1]: Started sshd@64-64.23.228.240:22-147.75.109.163:33876.service - OpenSSH per-connection server daemon (147.75.109.163:33876). Jul 2 00:22:50.898160 systemd-logind[1445]: Removed session 28. 
Jul 2 00:22:50.914472 kubelet[2635]: I0702 00:22:50.913860 2635 topology_manager.go:215] "Topology Admit Handler" podUID="babbb8e8-765e-445f-a228-5afbdc14e34d" podNamespace="kube-system" podName="cilium-m8j6d" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928034 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="mount-cgroup" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928090 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="apply-sysctl-overwrites" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928106 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="mount-bpf-fs" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928124 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="cilium-agent" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928137 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" containerName="cilium-operator" Jul 2 00:22:50.928636 kubelet[2635]: E0702 00:22:50.928150 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="clean-cilium-state" Jul 2 00:22:50.939488 kubelet[2635]: I0702 00:22:50.938129 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" containerName="cilium-agent" Jul 2 00:22:50.939488 kubelet[2635]: I0702 00:22:50.938202 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" containerName="cilium-operator" Jul 2 00:22:50.978285 sshd[4561]: Accepted publickey for core from 147.75.109.163 port 33876 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:50.981687 
sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:51.004252 systemd-logind[1445]: New session 29 of user core. Jul 2 00:22:51.010344 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:22:51.028765 systemd[1]: Created slice kubepods-burstable-podbabbb8e8_765e_445f_a228_5afbdc14e34d.slice - libcontainer container kubepods-burstable-podbabbb8e8_765e_445f_a228_5afbdc14e34d.slice. Jul 2 00:22:51.086460 kubelet[2635]: I0702 00:22:51.086384 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-cilium-cgroup\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.088935 sshd[4561]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:51.091396 kubelet[2635]: I0702 00:22:51.091263 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/babbb8e8-765e-445f-a228-5afbdc14e34d-hubble-tls\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.091841 kubelet[2635]: I0702 00:22:51.091811 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-bpf-maps\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093592 kubelet[2635]: I0702 00:22:51.093537 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/babbb8e8-765e-445f-a228-5afbdc14e34d-cilium-config-path\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " 
pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093712 kubelet[2635]: I0702 00:22:51.093634 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kpl\" (UniqueName: \"kubernetes.io/projected/babbb8e8-765e-445f-a228-5afbdc14e34d-kube-api-access-57kpl\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093712 kubelet[2635]: I0702 00:22:51.093663 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-host-proc-sys-net\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093712 kubelet[2635]: I0702 00:22:51.093699 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-cni-path\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093720 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-etc-cni-netd\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093770 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-lib-modules\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093798 2635 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/babbb8e8-765e-445f-a228-5afbdc14e34d-cilium-ipsec-secrets\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093850 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-cilium-run\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093871 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-hostproc\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.093900 kubelet[2635]: I0702 00:22:51.093900 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-host-proc-sys-kernel\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.094190 kubelet[2635]: I0702 00:22:51.093937 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/babbb8e8-765e-445f-a228-5afbdc14e34d-clustermesh-secrets\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.094190 kubelet[2635]: I0702 00:22:51.093969 2635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/babbb8e8-765e-445f-a228-5afbdc14e34d-xtables-lock\") pod \"cilium-m8j6d\" (UID: \"babbb8e8-765e-445f-a228-5afbdc14e34d\") " pod="kube-system/cilium-m8j6d" Jul 2 00:22:51.102565 systemd[1]: sshd@64-64.23.228.240:22-147.75.109.163:33876.service: Deactivated successfully. Jul 2 00:22:51.108186 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:22:51.112139 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:22:51.118868 systemd[1]: Started sshd@65-64.23.228.240:22-147.75.109.163:33882.service - OpenSSH per-connection server daemon (147.75.109.163:33882). Jul 2 00:22:51.122044 systemd-logind[1445]: Removed session 29. Jul 2 00:22:51.152592 kubelet[2635]: E0702 00:22:51.152052 2635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fsw6x" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4" Jul 2 00:22:51.159967 kubelet[2635]: I0702 00:22:51.159910 2635 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c5662d66-0c07-4ace-a464-ea82897a6149" path="/var/lib/kubelet/pods/c5662d66-0c07-4ace-a464-ea82897a6149/volumes" Jul 2 00:22:51.162062 kubelet[2635]: I0702 00:22:51.162020 2635 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea" path="/var/lib/kubelet/pods/e9881cce-6c64-4e1d-85e6-b0cbdad5e8ea/volumes" Jul 2 00:22:51.183481 sshd[4569]: Accepted publickey for core from 147.75.109.163 port 33882 ssh2: RSA SHA256:b6ZVLwDYJnsRBH9JIX2JDazYtmNrgVfBC4H2Y4nzn9k Jul 2 00:22:51.186640 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:51.195371 systemd-logind[1445]: New session 30 of user core. Jul 2 00:22:51.202686 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jul 2 00:22:51.352892 kubelet[2635]: E0702 00:22:51.352837 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:22:51.356748 containerd[1470]: time="2024-07-02T00:22:51.356352207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8j6d,Uid:babbb8e8-765e-445f-a228-5afbdc14e34d,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:51.424475 containerd[1470]: time="2024-07-02T00:22:51.420367832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:51.424475 containerd[1470]: time="2024-07-02T00:22:51.420544377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:51.424475 containerd[1470]: time="2024-07-02T00:22:51.420581851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:51.424475 containerd[1470]: time="2024-07-02T00:22:51.420607637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:51.471297 systemd[1]: Started cri-containerd-f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903.scope - libcontainer container f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903. 
Jul 2 00:22:51.513916 containerd[1470]: time="2024-07-02T00:22:51.513842658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8j6d,Uid:babbb8e8-765e-445f-a228-5afbdc14e34d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\"" Jul 2 00:22:51.515505 kubelet[2635]: E0702 00:22:51.515159 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 00:22:51.531176 containerd[1470]: time="2024-07-02T00:22:51.530657073Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:22:51.555905 containerd[1470]: time="2024-07-02T00:22:51.555827789Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe\"" Jul 2 00:22:51.559128 containerd[1470]: time="2024-07-02T00:22:51.558856260Z" level=info msg="StartContainer for \"21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe\"" Jul 2 00:22:51.602767 systemd[1]: Started cri-containerd-21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe.scope - libcontainer container 21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe. Jul 2 00:22:51.650781 containerd[1470]: time="2024-07-02T00:22:51.650631668Z" level=info msg="StartContainer for \"21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe\" returns successfully" Jul 2 00:22:51.668757 systemd[1]: cri-containerd-21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe.scope: Deactivated successfully. 
Jul 2 00:22:51.722058 containerd[1470]: time="2024-07-02T00:22:51.720870012Z" level=info msg="shim disconnected" id=21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe namespace=k8s.io
Jul 2 00:22:51.722058 containerd[1470]: time="2024-07-02T00:22:51.720960444Z" level=warning msg="cleaning up after shim disconnected" id=21a081103a1835a7a10d9cb3259fec95bac22fbf7285c48ca19bf09cb79a7dfe namespace=k8s.io
Jul 2 00:22:51.722058 containerd[1470]: time="2024-07-02T00:22:51.720977686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:22:52.277850 kubelet[2635]: E0702 00:22:52.277238 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:52.287768 containerd[1470]: time="2024-07-02T00:22:52.287692389Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:22:52.309929 containerd[1470]: time="2024-07-02T00:22:52.309855434Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807\""
Jul 2 00:22:52.312967 containerd[1470]: time="2024-07-02T00:22:52.312809180Z" level=info msg="StartContainer for \"e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807\""
Jul 2 00:22:52.363901 systemd[1]: Started cri-containerd-e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807.scope - libcontainer container e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807.
Jul 2 00:22:52.422223 containerd[1470]: time="2024-07-02T00:22:52.420477328Z" level=info msg="StartContainer for \"e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807\" returns successfully"
Jul 2 00:22:52.432674 systemd[1]: cri-containerd-e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807.scope: Deactivated successfully.
Jul 2 00:22:52.465602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807-rootfs.mount: Deactivated successfully.
Jul 2 00:22:52.471994 containerd[1470]: time="2024-07-02T00:22:52.471889188Z" level=info msg="shim disconnected" id=e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807 namespace=k8s.io
Jul 2 00:22:52.471994 containerd[1470]: time="2024-07-02T00:22:52.471982566Z" level=warning msg="cleaning up after shim disconnected" id=e67f098b609967de89f3031910d68218d4c3323b02d18e0cc300973847b5e807 namespace=k8s.io
Jul 2 00:22:52.471994 containerd[1470]: time="2024-07-02T00:22:52.472003245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:22:52.494247 containerd[1470]: time="2024-07-02T00:22:52.494103969Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:22:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 00:22:53.151852 kubelet[2635]: E0702 00:22:53.151689 2635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fsw6x" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4"
Jul 2 00:22:53.282539 kubelet[2635]: E0702 00:22:53.282491 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:53.286887 containerd[1470]: time="2024-07-02T00:22:53.286359577Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:22:53.315930 containerd[1470]: time="2024-07-02T00:22:53.315824806Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6\""
Jul 2 00:22:53.319553 containerd[1470]: time="2024-07-02T00:22:53.319462372Z" level=info msg="StartContainer for \"2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6\""
Jul 2 00:22:53.380403 systemd[1]: Started cri-containerd-2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6.scope - libcontainer container 2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6.
Jul 2 00:22:53.431598 containerd[1470]: time="2024-07-02T00:22:53.431380093Z" level=info msg="StartContainer for \"2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6\" returns successfully"
Jul 2 00:22:53.443443 systemd[1]: cri-containerd-2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6.scope: Deactivated successfully.
Jul 2 00:22:53.498270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6-rootfs.mount: Deactivated successfully.
Jul 2 00:22:53.502462 containerd[1470]: time="2024-07-02T00:22:53.501393666Z" level=info msg="shim disconnected" id=2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6 namespace=k8s.io
Jul 2 00:22:53.502462 containerd[1470]: time="2024-07-02T00:22:53.501525941Z" level=warning msg="cleaning up after shim disconnected" id=2b99604d317fdba465638970a6e702226c1af790d9665eaf3a6323834d706bc6 namespace=k8s.io
Jul 2 00:22:53.502462 containerd[1470]: time="2024-07-02T00:22:53.501544600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:22:53.614277 kubelet[2635]: E0702 00:22:53.614180 2635 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:22:54.288117 kubelet[2635]: E0702 00:22:54.288057 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:54.295170 containerd[1470]: time="2024-07-02T00:22:54.295100643Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:22:54.334182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676771293.mount: Deactivated successfully.
Jul 2 00:22:54.337262 containerd[1470]: time="2024-07-02T00:22:54.337182457Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e\""
Jul 2 00:22:54.339103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740847129.mount: Deactivated successfully.
Jul 2 00:22:54.340322 containerd[1470]: time="2024-07-02T00:22:54.338173136Z" level=info msg="StartContainer for \"a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e\""
Jul 2 00:22:54.406844 systemd[1]: Started cri-containerd-a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e.scope - libcontainer container a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e.
Jul 2 00:22:54.448974 systemd[1]: cri-containerd-a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e.scope: Deactivated successfully.
Jul 2 00:22:54.454319 containerd[1470]: time="2024-07-02T00:22:54.454163999Z" level=info msg="StartContainer for \"a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e\" returns successfully"
Jul 2 00:22:54.495120 containerd[1470]: time="2024-07-02T00:22:54.495009305Z" level=info msg="shim disconnected" id=a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e namespace=k8s.io
Jul 2 00:22:54.495120 containerd[1470]: time="2024-07-02T00:22:54.495107738Z" level=warning msg="cleaning up after shim disconnected" id=a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e namespace=k8s.io
Jul 2 00:22:54.495484 containerd[1470]: time="2024-07-02T00:22:54.495141657Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:22:55.152603 kubelet[2635]: E0702 00:22:55.152067 2635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fsw6x" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4"
Jul 2 00:22:55.299110 kubelet[2635]: E0702 00:22:55.299068 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:55.306087 containerd[1470]: time="2024-07-02T00:22:55.305142544Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:22:55.323417 systemd[1]: run-containerd-runc-k8s.io-a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e-runc.RaPtii.mount: Deactivated successfully.
Jul 2 00:22:55.323597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a540067e6c68ac363d56a916e514ff60dc4f39f42ed5cc206396c3e650ce073e-rootfs.mount: Deactivated successfully.
Jul 2 00:22:55.344320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804361993.mount: Deactivated successfully.
Jul 2 00:22:55.352359 containerd[1470]: time="2024-07-02T00:22:55.352094148Z" level=info msg="CreateContainer within sandbox \"f91504f5062b434f23e11114bd42fa0876120028fb224a8aeb529ea52bbbc903\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b\""
Jul 2 00:22:55.353663 containerd[1470]: time="2024-07-02T00:22:55.353596698Z" level=info msg="StartContainer for \"fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b\""
Jul 2 00:22:55.418789 systemd[1]: Started cri-containerd-fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b.scope - libcontainer container fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b.
Jul 2 00:22:55.527146 containerd[1470]: time="2024-07-02T00:22:55.526991919Z" level=info msg="StartContainer for \"fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b\" returns successfully"
Jul 2 00:22:55.638772 kubelet[2635]: I0702 00:22:55.638714 2635 setters.go:568] "Node became not ready" node="ci-3975.1.1-9-82cbb2c548" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:22:55Z","lastTransitionTime":"2024-07-02T00:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:22:56.315672 kubelet[2635]: E0702 00:22:56.315605 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:56.383898 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:22:57.153470 kubelet[2635]: E0702 00:22:57.151697 2635 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fsw6x" podUID="e5439f5a-d36f-4f2c-8340-872248bf73c4"
Jul 2 00:22:57.356124 kubelet[2635]: E0702 00:22:57.354874 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:57.650596 systemd[1]: Started sshd@66-64.23.228.240:22-190.181.4.12:48778.service - OpenSSH per-connection server daemon (190.181.4.12:48778).
Jul 2 00:22:58.037754 systemd[1]: run-containerd-runc-k8s.io-fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b-runc.62Z48M.mount: Deactivated successfully.
Jul 2 00:22:58.152228 kubelet[2635]: E0702 00:22:58.151754 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:59.153138 kubelet[2635]: E0702 00:22:59.152138 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:22:59.212844 sshd[5007]: Received disconnect from 190.181.4.12 port 48778:11: Bye Bye [preauth]
Jul 2 00:22:59.212844 sshd[5007]: Disconnected from authenticating user root 190.181.4.12 port 48778 [preauth]
Jul 2 00:22:59.219243 systemd[1]: sshd@66-64.23.228.240:22-190.181.4.12:48778.service: Deactivated successfully.
Jul 2 00:23:00.209361 systemd-networkd[1368]: lxc_health: Link UP
Jul 2 00:23:00.234719 systemd-networkd[1368]: lxc_health: Gained carrier
Jul 2 00:23:00.357849 systemd[1]: run-containerd-runc-k8s.io-fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b-runc.Xa9cb9.mount: Deactivated successfully.
Jul 2 00:23:01.155167 kubelet[2635]: E0702 00:23:01.154311 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:23:01.368978 kubelet[2635]: E0702 00:23:01.368927 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:23:01.472591 kubelet[2635]: I0702 00:23:01.472180 2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m8j6d" podStartSLOduration=11.472110765 podStartE2EDuration="11.472110765s" podCreationTimestamp="2024-07-02 00:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:56.348811997 +0000 UTC m=+133.704620654" watchObservedRunningTime="2024-07-02 00:23:01.472110765 +0000 UTC m=+138.827919429"
Jul 2 00:23:01.633734 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jul 2 00:23:02.114009 systemd[1]: Started sshd@67-64.23.228.240:22-43.156.152.211:41536.service - OpenSSH per-connection server daemon (43.156.152.211:41536).
Jul 2 00:23:02.407781 kubelet[2635]: E0702 00:23:02.391210 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:23:02.792051 systemd[1]: run-containerd-runc-k8s.io-fcaa3f90616805ee0e4fab4cfbf2b9532c2c467e3511260b4d97abd9f415232b-runc.Y4BZrB.mount: Deactivated successfully.
Jul 2 00:23:03.394497 kubelet[2635]: E0702 00:23:03.393574 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 00:23:03.824027 sshd[5477]: Received disconnect from 43.156.152.211 port 41536:11: Bye Bye [preauth]
Jul 2 00:23:03.824027 sshd[5477]: Disconnected from authenticating user root 43.156.152.211 port 41536 [preauth]
Jul 2 00:23:03.828091 systemd[1]: sshd@67-64.23.228.240:22-43.156.152.211:41536.service: Deactivated successfully.
Jul 2 00:23:05.416671 sshd[4569]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:05.431448 systemd[1]: sshd@65-64.23.228.240:22-147.75.109.163:33882.service: Deactivated successfully.
Jul 2 00:23:05.438207 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:23:05.443063 systemd-logind[1445]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:23:05.447254 systemd-logind[1445]: Removed session 30.