Jun 26 07:16:16.355749 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 26 07:16:16.355810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 26 07:16:16.355839 kernel: BIOS-provided physical RAM map: Jun 26 07:16:16.355857 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 26 07:16:16.355877 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 26 07:16:16.355897 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 26 07:16:16.355920 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jun 26 07:16:16.355941 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jun 26 07:16:16.355963 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 26 07:16:16.355988 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 26 07:16:16.356006 kernel: NX (Execute Disable) protection: active Jun 26 07:16:16.356027 kernel: APIC: Static calls initialized Jun 26 07:16:16.356048 kernel: SMBIOS 2.8 present. Jun 26 07:16:16.356071 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 26 07:16:16.356092 kernel: Hypervisor detected: KVM Jun 26 07:16:16.356124 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 26 07:16:16.356211 kernel: kvm-clock: using sched offset of 6919648839 cycles Jun 26 07:16:16.356236 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 26 07:16:16.356254 kernel: tsc: Detected 2294.606 MHz processor Jun 26 07:16:16.356280 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 26 07:16:16.356305 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 26 07:16:16.356324 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jun 26 07:16:16.356346 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 26 07:16:16.356371 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 26 07:16:16.356401 kernel: ACPI: Early table checksum verification disabled Jun 26 07:16:16.356424 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jun 26 07:16:16.356449 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356473 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356493 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356517 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 26 07:16:16.356542 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356562 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356587 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356617 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 26 07:16:16.356639 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 26 07:16:16.356664 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 26 07:16:16.356688 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 26 07:16:16.356710 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 26 07:16:16.356733 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 26 07:16:16.356759 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 26 07:16:16.356797 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 26 07:16:16.356823 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 26 07:16:16.356849 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 26 07:16:16.356872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 26 07:16:16.356899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 26 07:16:16.356926 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jun 26 07:16:16.356948 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jun 26 07:16:16.356980 kernel: Zone ranges: Jun 26 07:16:16.357007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 26 07:16:16.357028 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jun 26 07:16:16.357055 kernel: Normal empty Jun 26 07:16:16.357081 kernel: Movable zone start for each node Jun 26 07:16:16.357103 kernel: Early memory node ranges Jun 26 07:16:16.359182 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 26 07:16:16.359236 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jun 26 07:16:16.359266 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jun 26 07:16:16.359293 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 26 07:16:16.359307 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 26 07:16:16.359330 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jun 26 07:16:16.359354 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 26 07:16:16.359379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 26 07:16:16.359394 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 26 07:16:16.359406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 26 07:16:16.359419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 26 07:16:16.359433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 26 07:16:16.359453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 26 07:16:16.359468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 26 07:16:16.359485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 26 07:16:16.359512 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 26 07:16:16.359538 kernel: TSC deadline timer available Jun 26 07:16:16.359564 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 26 07:16:16.359594 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 26 07:16:16.359607 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 26 07:16:16.359621 kernel: Booting paravirtualized kernel on KVM Jun 26 07:16:16.359642 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 26 07:16:16.359654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 26 07:16:16.359667 kernel: percpu: Embedded 58 pages/cpu 
s196904 r8192 d32472 u1048576 Jun 26 07:16:16.359682 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 26 07:16:16.359704 kernel: pcpu-alloc: [0] 0 1 Jun 26 07:16:16.359724 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 26 07:16:16.359747 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 26 07:16:16.359767 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 26 07:16:16.359802 kernel: random: crng init done Jun 26 07:16:16.359818 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 26 07:16:16.359831 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 26 07:16:16.359844 kernel: Fallback order for Node 0: 0 Jun 26 07:16:16.359858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jun 26 07:16:16.359870 kernel: Policy zone: DMA32 Jun 26 07:16:16.359883 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 26 07:16:16.359896 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved) Jun 26 07:16:16.359910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 26 07:16:16.359939 kernel: Kernel/User page tables isolation: enabled Jun 26 07:16:16.359968 kernel: ftrace: allocating 37650 entries in 148 pages Jun 26 07:16:16.360001 kernel: ftrace: allocated 148 pages with 3 groups Jun 26 07:16:16.360018 kernel: Dynamic Preempt: voluntary Jun 26 07:16:16.360034 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 26 07:16:16.360056 kernel: rcu: RCU event tracing is enabled. Jun 26 07:16:16.360086 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 26 07:16:16.360102 kernel: Trampoline variant of Tasks RCU enabled. Jun 26 07:16:16.360115 kernel: Rude variant of Tasks RCU enabled. Jun 26 07:16:16.360171 kernel: Tracing variant of Tasks RCU enabled. Jun 26 07:16:16.360184 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 26 07:16:16.360197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 26 07:16:16.360212 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 26 07:16:16.360226 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 26 07:16:16.360249 kernel: Console: colour VGA+ 80x25 Jun 26 07:16:16.360269 kernel: printk: console [tty0] enabled Jun 26 07:16:16.360289 kernel: printk: console [ttyS0] enabled Jun 26 07:16:16.360309 kernel: ACPI: Core revision 20230628 Jun 26 07:16:16.360350 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 26 07:16:16.360364 kernel: APIC: Switch to symmetric I/O mode setup Jun 26 07:16:16.360377 kernel: x2apic enabled Jun 26 07:16:16.360403 kernel: APIC: Switched APIC routing to: physical x2apic Jun 26 07:16:16.360427 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 26 07:16:16.360449 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jun 26 07:16:16.360469 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606) Jun 26 07:16:16.360497 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 26 07:16:16.360513 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 26 07:16:16.360547 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 26 07:16:16.360565 kernel: Spectre V2 : Mitigation: Retpolines Jun 26 07:16:16.360593 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 26 07:16:16.360612 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 26 07:16:16.360630 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 26 07:16:16.360644 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 26 07:16:16.360660 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 26 07:16:16.360674 kernel: MDS: Mitigation: Clear CPU buffers Jun 26 07:16:16.360687 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 26 07:16:16.360706 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 26 07:16:16.360720 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 26 07:16:16.360733 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 26 07:16:16.360745 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 26 07:16:16.360758 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 26 07:16:16.360770 kernel: Freeing SMP alternatives memory: 32K Jun 26 07:16:16.360783 kernel: pid_max: default: 32768 minimum: 301 Jun 26 07:16:16.360797 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 26 07:16:16.360817 kernel: SELinux: Initializing. Jun 26 07:16:16.360833 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 26 07:16:16.360848 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 26 07:16:16.360862 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 26 07:16:16.360875 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 26 07:16:16.360888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 26 07:16:16.360901 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 26 07:16:16.360914 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 26 07:16:16.360932 kernel: signal: max sigframe size: 1776 Jun 26 07:16:16.360946 kernel: rcu: Hierarchical SRCU implementation. 
Jun 26 07:16:16.360960 kernel: rcu: Max phase no-delay instances is 400. Jun 26 07:16:16.360972 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 26 07:16:16.360985 kernel: smp: Bringing up secondary CPUs ... Jun 26 07:16:16.360997 kernel: smpboot: x86: Booting SMP configuration: Jun 26 07:16:16.361011 kernel: .... node #0, CPUs: #1 Jun 26 07:16:16.361023 kernel: smp: Brought up 1 node, 2 CPUs Jun 26 07:16:16.361036 kernel: smpboot: Max logical packages: 1 Jun 26 07:16:16.361050 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS) Jun 26 07:16:16.361072 kernel: devtmpfs: initialized Jun 26 07:16:16.361087 kernel: x86/mm: Memory block size: 128MB Jun 26 07:16:16.361101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 26 07:16:16.361114 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 26 07:16:16.363218 kernel: pinctrl core: initialized pinctrl subsystem Jun 26 07:16:16.363282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 26 07:16:16.363313 kernel: audit: initializing netlink subsys (disabled) Jun 26 07:16:16.363344 kernel: audit: type=2000 audit(1719386174.121:1): state=initialized audit_enabled=0 res=1 Jun 26 07:16:16.363370 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 26 07:16:16.363412 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 26 07:16:16.363427 kernel: cpuidle: using governor menu Jun 26 07:16:16.363441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 26 07:16:16.363455 kernel: dca service started, version 1.12.1 Jun 26 07:16:16.363469 kernel: PCI: Using configuration type 1 for base access Jun 26 07:16:16.363482 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 26 07:16:16.363495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 26 07:16:16.363509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 26 07:16:16.363536 kernel: ACPI: Added _OSI(Module Device) Jun 26 07:16:16.363568 kernel: ACPI: Added _OSI(Processor Device) Jun 26 07:16:16.363584 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 26 07:16:16.363599 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 26 07:16:16.363619 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 26 07:16:16.363647 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 26 07:16:16.363675 kernel: ACPI: Interpreter enabled Jun 26 07:16:16.363689 kernel: ACPI: PM: (supports S0 S5) Jun 26 07:16:16.363702 kernel: ACPI: Using IOAPIC for interrupt routing Jun 26 07:16:16.363715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 26 07:16:16.363734 kernel: PCI: Using E820 reservations for host bridge windows Jun 26 07:16:16.363755 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 26 07:16:16.363784 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 26 07:16:16.364226 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 26 07:16:16.364454 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 26 07:16:16.364653 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 26 07:16:16.364682 kernel: acpiphp: Slot [3] registered Jun 26 07:16:16.364713 kernel: acpiphp: Slot [4] registered Jun 26 07:16:16.364735 kernel: acpiphp: Slot [5] registered Jun 26 07:16:16.364761 kernel: acpiphp: Slot [6] registered Jun 26 07:16:16.364782 kernel: acpiphp: Slot [7] registered Jun 26 07:16:16.364821 kernel: acpiphp: Slot [8] registered Jun 26 07:16:16.364843 kernel: acpiphp: Slot [9] registered Jun 26 07:16:16.364869 kernel: acpiphp: Slot [10] registered Jun 26 07:16:16.364892 kernel: acpiphp: Slot [11] registered Jun 26 07:16:16.364915 kernel: acpiphp: Slot [12] registered Jun 26 07:16:16.364939 kernel: acpiphp: Slot [13] registered Jun 26 07:16:16.364969 kernel: acpiphp: Slot [14] registered Jun 26 07:16:16.364991 kernel: acpiphp: Slot [15] registered Jun 26 07:16:16.365017 kernel: acpiphp: Slot [16] registered Jun 26 07:16:16.365031 kernel: acpiphp: Slot [17] registered Jun 26 07:16:16.365045 kernel: acpiphp: Slot [18] registered Jun 26 07:16:16.365062 kernel: acpiphp: Slot [19] registered Jun 26 07:16:16.365085 kernel: acpiphp: Slot [20] registered Jun 26 07:16:16.365099 kernel: acpiphp: Slot [21] registered Jun 26 07:16:16.365114 kernel: acpiphp: Slot [22] registered Jun 26 07:16:16.365127 kernel: acpiphp: Slot [23] registered Jun 26 07:16:16.367239 kernel: acpiphp: Slot [24] registered Jun 26 07:16:16.367282 kernel: acpiphp: Slot [25] registered Jun 26 07:16:16.367301 kernel: acpiphp: Slot [26] registered Jun 26 07:16:16.367323 kernel: acpiphp: Slot [27] registered Jun 26 07:16:16.367337 kernel: acpiphp: Slot [28] registered Jun 26 07:16:16.367352 kernel: acpiphp: Slot [29] registered Jun 26 07:16:16.367367 kernel: acpiphp: Slot [30] registered Jun 26 07:16:16.367384 kernel: acpiphp: Slot [31] registered Jun 26 07:16:16.367405 kernel: PCI host bridge to bus 0000:00 Jun 26 07:16:16.367669 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 26 07:16:16.367808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jun 26 07:16:16.367938 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 26 07:16:16.368067 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 26 07:16:16.368296 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 26 07:16:16.368530 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 26 07:16:16.368732 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 26 07:16:16.368930 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 26 07:16:16.371278 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 26 07:16:16.371584 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jun 26 07:16:16.371765 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 26 07:16:16.371939 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 26 07:16:16.372121 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 26 07:16:16.372355 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 26 07:16:16.372571 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jun 26 07:16:16.372763 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jun 26 07:16:16.372964 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 26 07:16:16.375280 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 26 07:16:16.375525 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 26 07:16:16.375775 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 26 07:16:16.375979 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 26 07:16:16.376186 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 26 07:16:16.376360 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jun 26 07:16:16.376544 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 26 07:16:16.376718 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 26 07:16:16.376927 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 26 07:16:16.377121 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jun 26 07:16:16.378471 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jun 26 07:16:16.378696 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 26 07:16:16.378935 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 26 07:16:16.379119 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jun 26 07:16:16.380528 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jun 26 07:16:16.380753 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 26 07:16:16.380946 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jun 26 07:16:16.382361 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jun 26 07:16:16.382624 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jun 26 07:16:16.382850 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 26 07:16:16.383042 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jun 26 07:16:16.384382 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jun 26 07:16:16.384575 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jun 26 07:16:16.384763 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 26 07:16:16.384947 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jun 26 07:16:16.385119 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jun 26 07:16:16.387505 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jun 26 07:16:16.387723 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jun 26 07:16:16.387943 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jun 26 07:16:16.388116 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jun 26 07:16:16.388359 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 26 07:16:16.388385 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 26 07:16:16.388400 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 26 07:16:16.388414 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 26 07:16:16.388430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 26 07:16:16.388445 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 26 07:16:16.388459 kernel: iommu: Default domain type: Translated Jun 26 07:16:16.388472 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 26 07:16:16.388495 kernel: PCI: Using ACPI for IRQ routing Jun 26 07:16:16.388509 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 26 07:16:16.388525 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 26 07:16:16.388547 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jun 26 07:16:16.388767 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 26 07:16:16.388983 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 26 07:16:16.392489 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 26 07:16:16.392552 kernel: vgaarb: loaded Jun 26 07:16:16.392592 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 26 07:16:16.392614 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 26 07:16:16.392638 kernel: clocksource: Switched to clocksource kvm-clock Jun 26 07:16:16.392658 kernel: VFS: Disk quotas dquot_6.6.0 Jun 26 07:16:16.392685 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 26 07:16:16.392707 kernel: pnp: PnP ACPI init Jun 26 07:16:16.392729 kernel: pnp: PnP ACPI: found 4 devices Jun 26 07:16:16.392757 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 26 07:16:16.392784 kernel: NET: Registered PF_INET protocol family Jun 26 07:16:16.392811 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 26 07:16:16.392838 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 26 07:16:16.392867 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 26 07:16:16.392890 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 26 07:16:16.392915 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 26 07:16:16.392944 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 26 07:16:16.392968 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 26 07:16:16.392993 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 26 07:16:16.393022 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 26 07:16:16.393053 kernel: NET: Registered PF_XDP protocol family Jun 26 07:16:16.393414 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 26 07:16:16.393601 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 26 
07:16:16.393738 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 26 07:16:16.393869 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 26 07:16:16.394019 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 26 07:16:16.394295 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 26 07:16:16.394520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 26 07:16:16.394852 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 26 07:16:16.395076 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 45004 usecs Jun 26 07:16:16.395106 kernel: PCI: CLS 0 bytes, default 64 Jun 26 07:16:16.396542 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 26 07:16:16.396593 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jun 26 07:16:16.396610 kernel: Initialise system trusted keyrings Jun 26 07:16:16.396625 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 26 07:16:16.396643 kernel: Key type asymmetric registered Jun 26 07:16:16.396659 kernel: Asymmetric key parser 'x509' registered Jun 26 07:16:16.396690 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 26 07:16:16.396705 kernel: io scheduler mq-deadline registered Jun 26 07:16:16.396723 kernel: io scheduler kyber registered Jun 26 07:16:16.396742 kernel: io scheduler bfq registered Jun 26 07:16:16.396757 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 26 07:16:16.396771 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 26 07:16:16.396785 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 26 07:16:16.396801 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 26 07:16:16.396815 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 26 07:16:16.396837 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 26 07:16:16.396851 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 26 07:16:16.396864 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 26 07:16:16.396878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 26 07:16:16.400399 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 26 07:16:16.400461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 26 07:16:16.400628 kernel: rtc_cmos 00:03: registered as rtc0 Jun 26 07:16:16.400782 kernel: rtc_cmos 00:03: setting system clock to 2024-06-26T07:16:15 UTC (1719386175) Jun 26 07:16:16.400934 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 26 07:16:16.400981 kernel: intel_pstate: CPU model not supported Jun 26 07:16:16.401004 kernel: NET: Registered PF_INET6 protocol family Jun 26 07:16:16.401026 kernel: Segment Routing with IPv6 Jun 26 07:16:16.401048 kernel: In-situ OAM (IOAM) with IPv6 Jun 26 07:16:16.401069 kernel: NET: Registered PF_PACKET protocol family Jun 26 07:16:16.401091 kernel: Key type dns_resolver registered Jun 26 07:16:16.401112 kernel: IPI shorthand broadcast: enabled Jun 26 07:16:16.401134 kernel: sched_clock: Marking stable (1666007229, 243294068)->(2150448221, -241146924) Jun 26 07:16:16.401190 kernel: registered taskstats version 1 Jun 26 07:16:16.401213 kernel: Loading compiled-in X.509 certificates Jun 26 07:16:16.401238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 26 07:16:16.401259 kernel: Key type .fscrypt 
registered Jun 26 07:16:16.401284 kernel: Key type fscrypt-provisioning registered Jun 26 07:16:16.401307 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 26 07:16:16.401329 kernel: ima: Allocated hash algorithm: sha1 Jun 26 07:16:16.401350 kernel: ima: No architecture policies found Jun 26 07:16:16.401378 kernel: clk: Disabling unused clocks Jun 26 07:16:16.401403 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 26 07:16:16.401425 kernel: Write protecting the kernel read-only data: 36864k Jun 26 07:16:16.401447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 26 07:16:16.401494 kernel: Run /init as init process Jun 26 07:16:16.401521 kernel: with arguments: Jun 26 07:16:16.401543 kernel: /init Jun 26 07:16:16.401566 kernel: with environment: Jun 26 07:16:16.401588 kernel: HOME=/ Jun 26 07:16:16.401614 kernel: TERM=linux Jun 26 07:16:16.401640 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 26 07:16:16.401671 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 26 07:16:16.401697 systemd[1]: Detected virtualization kvm. Jun 26 07:16:16.401720 systemd[1]: Detected architecture x86-64. Jun 26 07:16:16.401743 systemd[1]: Running in initrd. Jun 26 07:16:16.401766 systemd[1]: No hostname configured, using default hostname. Jun 26 07:16:16.401788 systemd[1]: Hostname set to . Jun 26 07:16:16.401816 systemd[1]: Initializing machine ID from VM UUID. Jun 26 07:16:16.401839 systemd[1]: Queued start job for default target initrd.target. Jun 26 07:16:16.401862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 26 07:16:16.401885 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 26 07:16:16.401913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 26 07:16:16.401937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 26 07:16:16.401960 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 26 07:16:16.401983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 26 07:16:16.402013 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 26 07:16:16.402037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 26 07:16:16.402061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 26 07:16:16.402085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 26 07:16:16.402110 systemd[1]: Reached target paths.target - Path Units. Jun 26 07:16:16.402150 systemd[1]: Reached target slices.target - Slice Units. Jun 26 07:16:16.402191 systemd[1]: Reached target swap.target - Swaps. Jun 26 07:16:16.402220 systemd[1]: Reached target timers.target - Timer Units. Jun 26 07:16:16.402247 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 26 07:16:16.402273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jun 26 07:16:16.402299 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 26 07:16:16.402324 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 26 07:16:16.402358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:16:16.402378 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 26 07:16:16.402395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:16:16.402416 systemd[1]: Reached target sockets.target - Socket Units. Jun 26 07:16:16.402442 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 26 07:16:16.402468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 26 07:16:16.402492 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 26 07:16:16.402514 systemd[1]: Starting systemd-fsck-usr.service... Jun 26 07:16:16.402554 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 26 07:16:16.402591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 26 07:16:16.402678 systemd-journald[184]: Collecting audit messages is disabled. Jun 26 07:16:16.402747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:16.402778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 26 07:16:16.402802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:16:16.402822 systemd[1]: Finished systemd-fsck-usr.service. Jun 26 07:16:16.402846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 26 07:16:16.402877 systemd-journald[184]: Journal started Jun 26 07:16:16.402919 systemd-journald[184]: Runtime Journal (/run/log/journal/7e2f0b271ae544678abec559dee99a3e) is 4.9M, max 39.3M, 34.4M free. Jun 26 07:16:16.404701 systemd-modules-load[185]: Inserted module 'overlay' Jun 26 07:16:16.450166 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 26 07:16:16.453206 kernel: Bridge firewalling registered Jun 26 07:16:16.452993 systemd-modules-load[185]: Inserted module 'br_netfilter' Jun 26 07:16:16.509612 systemd[1]: Started systemd-journald.service - Journal Service. Jun 26 07:16:16.511970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 26 07:16:16.515237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:16.525037 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 26 07:16:16.537650 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 26 07:16:16.547523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:16:16.554024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 26 07:16:16.571848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 26 07:16:16.579674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:16:16.596717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:16:16.607335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 26 07:16:16.615575 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 26 07:16:16.617035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 26 07:16:16.634867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 26 07:16:16.663120 dracut-cmdline[216]: dracut-dracut-053 Jun 26 07:16:16.672707 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 26 07:16:16.697839 systemd-resolved[218]: Positive Trust Anchors: Jun 26 07:16:16.697859 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 26 07:16:16.697942 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 26 07:16:16.703719 systemd-resolved[218]: Defaulting to hostname 'linux'. Jun 26 07:16:16.707311 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 26 07:16:16.709102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 26 07:16:16.862206 kernel: SCSI subsystem initialized Jun 26 07:16:16.881230 kernel: Loading iSCSI transport class v2.0-870. Jun 26 07:16:16.903215 kernel: iscsi: registered transport (tcp) Jun 26 07:16:16.947788 kernel: iscsi: registered transport (qla4xxx) Jun 26 07:16:16.947893 kernel: QLogic iSCSI HBA Driver Jun 26 07:16:17.057563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 26 07:16:17.068559 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 26 07:16:17.141099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 26 07:16:17.141254 kernel: device-mapper: uevent: version 1.0.3 Jun 26 07:16:17.145207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 26 07:16:17.238234 kernel: raid6: avx2x4 gen() 12830 MB/s Jun 26 07:16:17.251212 kernel: raid6: avx2x2 gen() 13034 MB/s Jun 26 07:16:17.269769 kernel: raid6: avx2x1 gen() 9433 MB/s Jun 26 07:16:17.269969 kernel: raid6: using algorithm avx2x2 gen() 13034 MB/s Jun 26 07:16:17.290484 kernel: raid6: .... xor() 11433 MB/s, rmw enabled Jun 26 07:16:17.290610 kernel: raid6: using avx2x2 recovery algorithm Jun 26 07:16:17.342380 kernel: xor: automatically using best checksumming function avx Jun 26 07:16:17.672222 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 26 07:16:17.707256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 26 07:16:17.717747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 26 07:16:17.759850 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jun 26 07:16:17.773353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:16:17.782836 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 26 07:16:17.845183 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jun 26 07:16:17.927385 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 26 07:16:17.941513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 26 07:16:18.039518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:16:18.052520 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 26 07:16:18.098276 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 26 07:16:18.109391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 26 07:16:18.112056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:16:18.112944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 26 07:16:18.121506 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 26 07:16:18.168739 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 26 07:16:18.212852 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 26 07:16:18.293348 kernel: scsi host0: Virtio SCSI HBA Jun 26 07:16:18.293667 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 26 07:16:18.293871 kernel: cryptd: max_cpu_qlen set to 1000 Jun 26 07:16:18.293899 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 26 07:16:18.293925 kernel: GPT:9289727 != 125829119 Jun 26 07:16:18.293957 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 26 07:16:18.293982 kernel: GPT:9289727 != 125829119 Jun 26 07:16:18.294006 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 26 07:16:18.294030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:16:18.294060 kernel: libata version 3.00 loaded. Jun 26 07:16:18.294085 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 26 07:16:18.372312 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 26 07:16:18.372605 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jun 26 07:16:18.372817 kernel: AVX2 version of gcm_enc/dec engaged. Jun 26 07:16:18.372846 kernel: scsi host1: ata_piix Jun 26 07:16:18.373110 kernel: AES CTR mode by8 optimization enabled Jun 26 07:16:18.374853 kernel: ACPI: bus type USB registered Jun 26 07:16:18.374945 kernel: usbcore: registered new interface driver usbfs Jun 26 07:16:18.374973 kernel: scsi host2: ata_piix Jun 26 07:16:18.375353 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jun 26 07:16:18.375383 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jun 26 07:16:18.387291 kernel: usbcore: registered new interface driver hub Jun 26 07:16:18.395319 kernel: usbcore: registered new device driver usb Jun 26 07:16:18.425842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 26 07:16:18.426073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:16:18.435660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 26 07:16:18.437968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:16:18.481855 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (457) Jun 26 07:16:18.481917 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Jun 26 07:16:18.438309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:18.443837 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:18.468722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:18.542935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 26 07:16:18.610789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:18.624908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 26 07:16:18.638884 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 26 07:16:18.639378 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 26 07:16:18.639596 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 26 07:16:18.639784 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 26 07:16:18.640061 kernel: hub 1-0:1.0: USB hub found Jun 26 07:16:18.640345 kernel: hub 1-0:1.0: 2 ports detected Jun 26 07:16:18.650829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 26 07:16:18.657966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 26 07:16:18.661320 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 26 07:16:18.682879 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 26 07:16:18.731355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 26 07:16:18.746522 disk-uuid[540]: Primary Header is updated. Jun 26 07:16:18.746522 disk-uuid[540]: Secondary Entries is updated. Jun 26 07:16:18.746522 disk-uuid[540]: Secondary Header is updated. Jun 26 07:16:18.762022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:16:18.773294 kernel: GPT:disk_guids don't match. Jun 26 07:16:18.773517 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 26 07:16:18.776185 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:16:18.785171 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:16:18.788447 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:16:19.805342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 26 07:16:19.810850 disk-uuid[541]: The operation has completed successfully. Jun 26 07:16:19.995291 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 26 07:16:19.995529 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 26 07:16:20.004623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 26 07:16:20.019644 sh[562]: Success Jun 26 07:16:20.054837 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 26 07:16:20.203063 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 26 07:16:20.246793 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 26 07:16:20.248352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 26 07:16:20.312658 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 26 07:16:20.312779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:16:20.316420 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 26 07:16:20.322796 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 26 07:16:20.322930 kernel: BTRFS info (device dm-0): using free space tree Jun 26 07:16:20.357697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 26 07:16:20.367571 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 26 07:16:20.395799 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 26 07:16:20.402549 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 26 07:16:20.428461 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:16:20.428549 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:16:20.432177 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:16:20.480296 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:16:20.544635 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 26 07:16:20.546858 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:16:20.603703 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 26 07:16:20.617688 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 26 07:16:20.884771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 26 07:16:20.900624 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 26 07:16:20.972383 ignition[684]: Ignition 2.19.0 Jun 26 07:16:20.972403 ignition[684]: Stage: fetch-offline Jun 26 07:16:20.974887 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 26 07:16:20.972514 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:20.972531 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:20.972741 ignition[684]: parsed url from cmdline: "" Jun 26 07:16:20.972748 ignition[684]: no config URL provided Jun 26 07:16:20.972757 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jun 26 07:16:20.972771 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jun 26 07:16:20.972780 ignition[684]: failed to fetch config: resource requires networking Jun 26 07:16:20.973189 ignition[684]: Ignition finished successfully Jun 26 07:16:20.996787 systemd-networkd[751]: lo: Link UP Jun 26 07:16:20.996813 systemd-networkd[751]: lo: Gained carrier Jun 26 07:16:21.000962 systemd-networkd[751]: Enumeration completed Jun 26 07:16:21.001722 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 26 07:16:21.001728 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 26 07:16:21.002902 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 26 07:16:21.004200 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 26 07:16:21.004210 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 26 07:16:21.005482 systemd[1]: Reached target network.target - Network. Jun 26 07:16:21.023696 systemd-networkd[751]: eth0: Link UP Jun 26 07:16:21.023704 systemd-networkd[751]: eth0: Gained carrier Jun 26 07:16:21.023727 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 26 07:16:21.038974 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 26 07:16:21.041482 systemd-networkd[751]: eth1: Link UP Jun 26 07:16:21.041489 systemd-networkd[751]: eth1: Gained carrier Jun 26 07:16:21.041510 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 26 07:16:21.096881 ignition[757]: Ignition 2.19.0 Jun 26 07:16:21.096900 ignition[757]: Stage: fetch Jun 26 07:16:21.098753 systemd-networkd[751]: eth0: DHCPv4 address 146.190.154.167/20, gateway 146.190.144.1 acquired from 169.254.169.253 Jun 26 07:16:21.097376 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:21.097395 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:21.097544 ignition[757]: parsed url from cmdline: "" Jun 26 07:16:21.097553 ignition[757]: no config URL provided Jun 26 07:16:21.097561 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jun 26 07:16:21.097573 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jun 26 07:16:21.097607 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 26 07:16:21.097926 ignition[757]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 26 07:16:21.122115 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.24/20 acquired from 169.254.169.253 Jun 26 07:16:21.298275 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Jun 26 07:16:21.345865 ignition[757]: GET result: OK Jun 26 07:16:21.347284 ignition[757]: parsing config with SHA512: f9c9a56639d6acefc59706fe09058982f3ff0785691fa50d29a3be6f7262e93303ac8e6f3752fb7b70f58e2a00e1ac71fd79a1d6c478f218e3701393abdc1f78 Jun 26 07:16:21.353507 unknown[757]: fetched base config from "system" Jun 26 07:16:21.354588 unknown[757]: fetched base config from "system" Jun 26 07:16:21.355332 unknown[757]: fetched user config from "digitalocean" Jun 26 07:16:21.355943 ignition[757]: fetch: fetch complete Jun 26 07:16:21.355951 ignition[757]: fetch: fetch passed Jun 26 07:16:21.356046 ignition[757]: Ignition finished successfully Jun 26 07:16:21.370796 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 26 07:16:21.378720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 26 07:16:21.425572 ignition[765]: Ignition 2.19.0 Jun 26 07:16:21.425600 ignition[765]: Stage: kargs Jun 26 07:16:21.425957 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:21.425977 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:21.427943 ignition[765]: kargs: kargs passed Jun 26 07:16:21.428066 ignition[765]: Ignition finished successfully Jun 26 07:16:21.434266 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 26 07:16:21.475259 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 26 07:16:21.522517 ignition[772]: Ignition 2.19.0 Jun 26 07:16:21.522639 ignition[772]: Stage: disks Jun 26 07:16:21.523253 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:21.523279 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:21.538373 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 26 07:16:21.525297 ignition[772]: disks: disks passed Jun 26 07:16:21.540722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 26 07:16:21.525407 ignition[772]: Ignition finished successfully Jun 26 07:16:21.541712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 26 07:16:21.546392 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 26 07:16:21.547839 systemd[1]: Reached target sysinit.target - System Initialization. Jun 26 07:16:21.548738 systemd[1]: Reached target basic.target - Basic System. Jun 26 07:16:21.562843 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 26 07:16:21.608775 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 26 07:16:21.621250 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 26 07:16:21.628007 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 26 07:16:21.893102 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 26 07:16:21.889431 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 26 07:16:21.891476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 26 07:16:21.911390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 26 07:16:21.917392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 26 07:16:21.950070 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789) Jun 26 07:16:21.950124 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:16:21.950167 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:16:21.950191 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:16:21.950213 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:16:21.968539 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jun 26 07:16:21.974476 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 26 07:16:21.981113 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 26 07:16:21.983086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 26 07:16:21.994413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 26 07:16:21.997578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 26 07:16:22.004309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 26 07:16:22.170478 coreos-metadata[807]: Jun 26 07:16:22.169 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:16:22.191509 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory Jun 26 07:16:22.196887 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Jun 26 07:16:22.203735 coreos-metadata[807]: Jun 26 07:16:22.203 INFO Fetch successful Jun 26 07:16:22.216894 coreos-metadata[807]: Jun 26 07:16:22.216 INFO wrote hostname ci-4012.0.0-0-d66a9e5a9c to /sysroot/etc/hostname Jun 26 07:16:22.220523 coreos-metadata[806]: Jun 26 07:16:22.219 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:16:22.218423 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 26 07:16:22.226695 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Jun 26 07:16:22.230376 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Jun 26 07:16:22.245199 coreos-metadata[806]: Jun 26 07:16:22.243 INFO Fetch successful Jun 26 07:16:22.252718 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jun 26 07:16:22.252941 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jun 26 07:16:22.344850 systemd-networkd[751]: eth0: Gained IPv6LL Jun 26 07:16:22.513533 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 26 07:16:22.521374 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 26 07:16:22.525416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 26 07:16:22.549980 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:16:22.549288 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 26 07:16:22.624676 ignition[911]: INFO : Ignition 2.19.0 Jun 26 07:16:22.626060 ignition[911]: INFO : Stage: mount Jun 26 07:16:22.628954 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:22.628954 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:22.633643 ignition[911]: INFO : mount: mount passed Jun 26 07:16:22.633643 ignition[911]: INFO : Ignition finished successfully Jun 26 07:16:22.638204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 26 07:16:22.639824 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 26 07:16:22.650612 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 26 07:16:22.673621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 26 07:16:22.721057 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924) Jun 26 07:16:22.733786 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 26 07:16:22.733896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 26 07:16:22.733927 kernel: BTRFS info (device vda6): using free space tree Jun 26 07:16:22.731495 systemd-networkd[751]: eth1: Gained IPv6LL Jun 26 07:16:22.752883 kernel: BTRFS info (device vda6): auto enabling async discard Jun 26 07:16:22.761963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
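Editor's note: the coreos-metadata entries above fetch http://169.254.169.254/metadata/v1.json and write the droplet hostname (ci-4012.0.0-0-d66a9e5a9c) to /sysroot/etc/hostname. A minimal sketch of that step follows; the "hostname" field name in the metadata JSON is an assumption (the log does not show the document layout), and the sketch writes to a temporary path rather than the real sysroot.

```python
import json
import urllib.request

# Endpoint and destination taken from the coreos-metadata log lines above;
# the "hostname" field name is an assumed detail of the metadata JSON.
METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(dest: str = "/tmp/hostname-demo") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]          # assumed field name
    with open(dest, "w") as fh:
        fh.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())
```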
Jun 26 07:16:22.825949 ignition[941]: INFO : Ignition 2.19.0 Jun 26 07:16:22.828503 ignition[941]: INFO : Stage: files Jun 26 07:16:22.828503 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:22.828503 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:22.832375 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Jun 26 07:16:22.838853 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 26 07:16:22.838853 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 26 07:16:22.870892 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 26 07:16:22.873078 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 26 07:16:22.878807 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 26 07:16:22.873428 unknown[941]: wrote ssh authorized keys file for user: core Jun 26 07:16:22.951583 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 26 07:16:22.965861 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 26 07:16:23.150306 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 26 07:16:23.333768 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 26 07:16:23.333768 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 26 07:16:23.333768 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 26 07:16:23.858590 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 26 07:16:24.094956 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 26 07:16:24.094956 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 26 07:16:24.110063 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 26 07:16:24.110063 
ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 26 07:16:24.145765 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 26 07:16:24.145765 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 26 07:16:24.145765 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 26 07:16:24.145765 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 26 07:16:24.145765 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 26 07:16:24.517716 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 26 07:16:25.209972 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 26 07:16:25.209972 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 26 07:16:25.215872 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 26 07:16:25.215872 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 26 07:16:25.215872 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 26 07:16:25.215872 ignition[941]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 26 07:16:25.223605 ignition[941]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 26 07:16:25.223605 ignition[941]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 26 07:16:25.223605 ignition[941]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 26 07:16:25.223605 ignition[941]: INFO : files: files passed Jun 26 07:16:25.223605 ignition[941]: INFO : Ignition finished successfully Jun 26 07:16:25.223541 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 26 07:16:25.235609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 26 07:16:25.248410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 26 07:16:25.279263 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 26 07:16:25.279439 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
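Editor's note: the files stage above stores the Helm release tarball at /sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz and enables a prepare-helm.service unit, which the later log describes as "Unpack helm to /opt/bin". The log never shows what that unit actually runs; the following is a plausible minimal sketch of the unpack step under those assumptions, extracting into a scratch directory instead of /opt/bin so it is safe to run anywhere.

```python
import os
import tarfile

# Archive path follows the Ignition 'files' entries above; the member name
# linux-amd64/helm is the layout of the upstream Helm release tarball.
ARCHIVE = "/opt/helm-v3.13.2-linux-amd64.tar.gz"
TARGET_DIR = "/tmp/helm-demo"   # stand-in for /opt/bin in this sketch

def unpack_helm(archive: str = ARCHIVE, target: str = TARGET_DIR) -> None:
    os.makedirs(target, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        member = tar.getmember("linux-amd64/helm")  # the helm binary
        member.name = "helm"                        # drop the leading directory
        tar.extract(member, path=target)

if __name__ == "__main__":
    unpack_helm()
```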
Jun 26 07:16:25.302362 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:16:25.304934 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:16:25.306851 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 26 07:16:25.307615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 26 07:16:25.312762 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 26 07:16:25.321763 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 26 07:16:25.412463 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 26 07:16:25.412713 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 26 07:16:25.425614 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 26 07:16:25.426997 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 26 07:16:25.428049 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 26 07:16:25.437776 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 26 07:16:25.492838 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 26 07:16:25.520772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 26 07:16:25.553839 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 26 07:16:25.556828 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:16:25.560082 systemd[1]: Stopped target timers.target - Timer Units. Jun 26 07:16:25.561179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 26 07:16:25.561545 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 26 07:16:25.563043 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 26 07:16:25.564084 systemd[1]: Stopped target basic.target - Basic System. Jun 26 07:16:25.565063 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 26 07:16:25.566013 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 26 07:16:25.567025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 26 07:16:25.568183 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 26 07:16:25.569223 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 26 07:16:25.570299 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 26 07:16:25.585765 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 26 07:16:25.595036 systemd[1]: Stopped target swap.target - Swaps. Jun 26 07:16:25.596055 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 26 07:16:25.596306 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 26 07:16:25.597686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 26 07:16:25.598911 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 26 07:16:25.602010 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 26 07:16:25.602220 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 26 07:16:25.603512 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 26 07:16:25.603799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 26 07:16:25.605288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 26 07:16:25.605597 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 26 07:16:25.606851 systemd[1]: ignition-files.service: Deactivated successfully. Jun 26 07:16:25.607065 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 26 07:16:25.608053 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 26 07:16:25.608340 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 26 07:16:25.636878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 26 07:16:25.639811 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 26 07:16:25.640213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:16:25.654200 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 26 07:16:25.671598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 26 07:16:25.672041 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:16:25.674280 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 26 07:16:25.675636 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 26 07:16:25.688890 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 26 07:16:25.691205 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 26 07:16:25.726543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 26 07:16:25.728971 ignition[994]: INFO : Ignition 2.19.0 Jun 26 07:16:25.728971 ignition[994]: INFO : Stage: umount Jun 26 07:16:25.739044 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 26 07:16:25.739044 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 26 07:16:25.739044 ignition[994]: INFO : umount: umount passed Jun 26 07:16:25.739044 ignition[994]: INFO : Ignition finished successfully Jun 26 07:16:25.751429 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 26 07:16:25.751708 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 26 07:16:25.763922 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 26 07:16:25.764086 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 26 07:16:25.768854 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 26 07:16:25.769016 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 26 07:16:25.772428 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 26 07:16:25.772579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 26 07:16:25.778016 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 26 07:16:25.778173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 26 07:16:25.780010 systemd[1]: Stopped target network.target - Network. Jun 26 07:16:25.780804 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 26 07:16:25.780921 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 26 07:16:25.781844 systemd[1]: Stopped target paths.target - Path Units. Jun 26 07:16:25.789588 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 26 07:16:25.795220 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 26 07:16:25.796341 systemd[1]: Stopped target slices.target - Slice Units. Jun 26 07:16:25.801271 systemd[1]: Stopped target sockets.target - Socket Units. Jun 26 07:16:25.802247 systemd[1]: iscsid.socket: Deactivated successfully. Jun 26 07:16:25.802333 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 26 07:16:25.803552 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 26 07:16:25.803643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 26 07:16:25.807495 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 26 07:16:25.807628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 26 07:16:25.811859 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 26 07:16:25.812026 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 26 07:16:25.813987 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 26 07:16:25.814094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 26 07:16:25.818754 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 26 07:16:25.821588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 26 07:16:25.832007 systemd-networkd[751]: eth1: DHCPv6 lease lost Jun 26 07:16:25.837644 systemd-networkd[751]: eth0: DHCPv6 lease lost Jun 26 07:16:25.845998 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 26 07:16:25.846563 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 26 07:16:25.851977 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 26 07:16:25.853588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 26 07:16:25.865643 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 26 07:16:25.865803 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:16:25.894872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 26 07:16:25.895694 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 26 07:16:25.895848 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 26 07:16:25.902732 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 26 07:16:25.902846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:16:25.905728 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 26 07:16:25.905848 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 26 07:16:25.911203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 26 07:16:25.911319 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 26 07:16:25.912801 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 26 07:16:25.950213 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 26 07:16:25.950637 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:16:25.961653 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 26 07:16:25.961840 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 26 07:16:25.963999 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jun 26 07:16:25.964115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 26 07:16:25.965685 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 26 07:16:25.965788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:16:25.966729 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 26 07:16:25.966816 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 26 07:16:25.967822 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 26 07:16:25.968112 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 26 07:16:25.974894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 26 07:16:25.975044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 26 07:16:26.004155 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 26 07:16:26.005579 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 26 07:16:26.005755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:16:26.011281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:16:26.011383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:26.015818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 26 07:16:26.016003 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 26 07:16:26.041664 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 26 07:16:26.053825 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 26 07:16:26.074125 systemd[1]: Switching root. Jun 26 07:16:26.211080 systemd-journald[184]: Journal stopped Jun 26 07:16:28.851034 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jun 26 07:16:28.851206 kernel: SELinux: policy capability network_peer_controls=1 Jun 26 07:16:28.851238 kernel: SELinux: policy capability open_perms=1 Jun 26 07:16:28.851256 kernel: SELinux: policy capability extended_socket_class=1 Jun 26 07:16:28.851274 kernel: SELinux: policy capability always_check_network=0 Jun 26 07:16:28.851290 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 26 07:16:28.851308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 26 07:16:28.851332 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 26 07:16:28.851350 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 26 07:16:28.851376 kernel: audit: type=1403 audit(1719386186.701:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 26 07:16:28.851396 systemd[1]: Successfully loaded SELinux policy in 73.648ms. Jun 26 07:16:28.851428 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.943ms. Jun 26 07:16:28.851449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 26 07:16:28.851468 systemd[1]: Detected virtualization kvm. Jun 26 07:16:28.851492 systemd[1]: Detected architecture x86-64. Jun 26 07:16:28.851514 systemd[1]: Detected first boot. Jun 26 07:16:28.851538 systemd[1]: Hostname set to . Jun 26 07:16:28.851557 systemd[1]: Initializing machine ID from VM UUID. 
Jun 26 07:16:28.851579 zram_generator::config[1038]: No configuration found. Jun 26 07:16:28.851599 systemd[1]: Populated /etc with preset unit settings. Jun 26 07:16:28.851617 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 26 07:16:28.851673 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 26 07:16:28.851693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 26 07:16:28.851713 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 26 07:16:28.851734 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 26 07:16:28.851754 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 26 07:16:28.851777 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 26 07:16:28.851796 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 26 07:16:28.851813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 26 07:16:28.851832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 26 07:16:28.851850 systemd[1]: Created slice user.slice - User and Session Slice. Jun 26 07:16:28.851869 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 26 07:16:28.851888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 26 07:16:28.851906 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 26 07:16:28.851927 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 26 07:16:28.851947 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 26 07:16:28.851965 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 26 07:16:28.851984 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 26 07:16:28.852002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 26 07:16:28.852021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 26 07:16:28.852040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 26 07:16:28.852063 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 26 07:16:28.852081 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 26 07:16:28.852099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 26 07:16:28.852119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 26 07:16:28.875111 systemd[1]: Reached target slices.target - Slice Units. Jun 26 07:16:28.875199 systemd[1]: Reached target swap.target - Swaps. Jun 26 07:16:28.875227 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 26 07:16:28.875253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 26 07:16:28.875278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 26 07:16:28.875328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 26 07:16:28.875365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 26 07:16:28.875400 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jun 26 07:16:28.875435 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 26 07:16:28.875469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 26 07:16:28.875507 systemd[1]: Mounting media.mount - External Media Directory... Jun 26 07:16:28.875542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:28.875578 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 26 07:16:28.875601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 26 07:16:28.875631 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 26 07:16:28.875655 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 26 07:16:28.875674 systemd[1]: Reached target machines.target - Containers. Jun 26 07:16:28.875693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 26 07:16:28.875713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:28.875738 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 26 07:16:28.875766 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 26 07:16:28.875790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:16:28.875841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 26 07:16:28.875863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 26 07:16:28.875903 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 26 07:16:28.875924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:16:28.875950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 26 07:16:28.875976 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 26 07:16:28.875998 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 26 07:16:28.876019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 26 07:16:28.876045 systemd[1]: Stopped systemd-fsck-usr.service. Jun 26 07:16:28.876065 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 26 07:16:28.876084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 26 07:16:28.876103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 26 07:16:28.876123 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 26 07:16:28.876318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 26 07:16:28.876353 systemd[1]: verity-setup.service: Deactivated successfully. Jun 26 07:16:28.876374 systemd[1]: Stopped verity-setup.service. Jun 26 07:16:28.876396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:28.876429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 26 07:16:28.876452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jun 26 07:16:28.876478 kernel: loop: module loaded Jun 26 07:16:28.876501 systemd[1]: Mounted media.mount - External Media Directory. Jun 26 07:16:28.876522 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 26 07:16:28.876606 systemd-journald[1110]: Collecting audit messages is disabled. Jun 26 07:16:28.876654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 26 07:16:28.876679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 26 07:16:28.876704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 26 07:16:28.876729 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 26 07:16:28.876754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 26 07:16:28.876780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 26 07:16:28.876811 systemd-journald[1110]: Journal started Jun 26 07:16:28.876857 systemd-journald[1110]: Runtime Journal (/run/log/journal/7e2f0b271ae544678abec559dee99a3e) is 4.9M, max 39.3M, 34.4M free. Jun 26 07:16:28.121579 systemd[1]: Queued start job for default target multi-user.target. Jun 26 07:16:28.888879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:16:28.888974 systemd[1]: Started systemd-journald.service - Journal Service. Jun 26 07:16:28.182923 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 26 07:16:28.183864 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 26 07:16:28.892099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:16:28.894294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:16:28.895909 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:16:28.897267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 26 07:16:28.900624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 26 07:16:28.904514 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 26 07:16:28.907495 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 26 07:16:28.947213 kernel: fuse: init (API version 7.39) Jun 26 07:16:28.958034 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 26 07:16:28.976364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 26 07:16:28.978382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 26 07:16:28.978483 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 26 07:16:28.984091 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 26 07:16:29.001450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 26 07:16:29.010684 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 26 07:16:29.011911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 26 07:16:29.030884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 26 07:16:29.041543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jun 26 07:16:29.043630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:16:29.050952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 26 07:16:29.053353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 26 07:16:29.063117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:16:29.099581 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 26 07:16:29.109088 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 26 07:16:29.114176 kernel: ACPI: bus type drm_connector registered Jun 26 07:16:29.120610 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 26 07:16:29.121323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 26 07:16:29.125429 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 26 07:16:29.127962 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 26 07:16:29.130346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 26 07:16:29.134171 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 26 07:16:29.140367 systemd-journald[1110]: Time spent on flushing to /var/log/journal/7e2f0b271ae544678abec559dee99a3e is 187.368ms for 991 entries. Jun 26 07:16:29.140367 systemd-journald[1110]: System Journal (/var/log/journal/7e2f0b271ae544678abec559dee99a3e) is 8.0M, max 195.6M, 187.6M free. Jun 26 07:16:29.380421 systemd-journald[1110]: Received client request to flush runtime journal. Jun 26 07:16:29.380541 kernel: loop0: detected capacity change from 0 to 80568 Jun 26 07:16:29.380598 kernel: block loop0: the capability attribute has been deprecated. Jun 26 07:16:29.380777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 26 07:16:29.163418 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 26 07:16:29.178587 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 26 07:16:29.193361 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 26 07:16:29.245255 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 26 07:16:29.250006 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 26 07:16:29.259525 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 26 07:16:29.309275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 26 07:16:29.331533 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 26 07:16:29.372036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:16:29.388763 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 26 07:16:29.399765 kernel: loop1: detected capacity change from 0 to 139760 Jun 26 07:16:29.430332 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 26 07:16:29.434383 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 26 07:16:29.443086 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jun 26 07:16:29.455338 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 26 07:16:29.468096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 26 07:16:29.479981 kernel: loop2: detected capacity change from 0 to 211296 Jun 26 07:16:29.625533 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jun 26 07:16:29.627144 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jun 26 07:16:29.667816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 26 07:16:29.675172 kernel: loop3: detected capacity change from 0 to 8 Jun 26 07:16:29.780228 kernel: loop4: detected capacity change from 0 to 80568 Jun 26 07:16:29.844287 kernel: loop5: detected capacity change from 0 to 139760 Jun 26 07:16:29.906056 kernel: loop6: detected capacity change from 0 to 211296 Jun 26 07:16:29.971333 kernel: loop7: detected capacity change from 0 to 8 Jun 26 07:16:29.972088 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 26 07:16:29.973065 (sd-merge)[1182]: Merged extensions into '/usr'. Jun 26 07:16:29.985230 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jun 26 07:16:29.985256 systemd[1]: Reloading... Jun 26 07:16:30.246223 zram_generator::config[1203]: No configuration found. Jun 26 07:16:30.689569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:16:30.869269 systemd[1]: Reloading finished in 878 ms. Jun 26 07:16:30.918177 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 26 07:16:30.929419 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 26 07:16:30.931214 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 26 07:16:30.944688 systemd[1]: Starting ensure-sysext.service... Jun 26 07:16:30.955508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 26 07:16:30.988433 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jun 26 07:16:30.988469 systemd[1]: Reloading... Jun 26 07:16:31.072385 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 26 07:16:31.073077 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 26 07:16:31.077078 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 26 07:16:31.079802 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 26 07:16:31.079913 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jun 26 07:16:31.094732 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jun 26 07:16:31.094751 systemd-tmpfiles[1250]: Skipping /boot Jun 26 07:16:31.150300 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jun 26 07:16:31.150394 systemd-tmpfiles[1250]: Skipping /boot Jun 26 07:16:31.242170 zram_generator::config[1275]: No configuration found. 
Jun 26 07:16:31.557111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:16:31.684469 systemd[1]: Reloading finished in 695 ms. Jun 26 07:16:31.707930 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 26 07:16:31.722266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 26 07:16:31.745895 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 26 07:16:31.750914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 26 07:16:31.759814 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 26 07:16:31.772969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 26 07:16:31.787727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 26 07:16:31.798786 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 26 07:16:31.809160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:31.809630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:31.821715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:16:31.830839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 26 07:16:31.845652 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:16:31.848289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 26 07:16:31.848556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:31.857848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 26 07:16:31.862013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:31.863517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:31.863875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 26 07:16:31.864115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:31.872254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:31.872808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:31.882759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 26 07:16:31.884593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 26 07:16:31.884974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 26 07:16:31.892049 systemd[1]: Finished ensure-sysext.service. Jun 26 07:16:31.894769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:16:31.895016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:16:31.901092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 26 07:16:31.901367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:16:31.905063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:16:31.916652 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 26 07:16:31.930207 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 26 07:16:31.952660 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 26 07:16:31.959602 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 26 07:16:31.970184 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:16:31.970675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 26 07:16:31.972938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 26 07:16:31.976332 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 26 07:16:31.976666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 26 07:16:31.983527 augenrules[1354]: No rules Jun 26 07:16:31.990091 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 26 07:16:32.026505 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 26 07:16:32.035771 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jun 26 07:16:32.060948 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 26 07:16:32.076801 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 26 07:16:32.079655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 26 07:16:32.111088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 26 07:16:32.129476 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 26 07:16:32.320630 systemd-resolved[1324]: Positive Trust Anchors: Jun 26 07:16:32.323630 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 26 07:16:32.323763 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 26 07:16:32.332755 systemd-networkd[1373]: lo: Link UP Jun 26 07:16:32.332768 systemd-networkd[1373]: lo: Gained carrier Jun 26 07:16:32.333983 systemd-networkd[1373]: Enumeration completed Jun 26 07:16:32.334229 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 26 07:16:32.343223 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 26 07:16:32.346749 systemd-resolved[1324]: Using system hostname 'ci-4012.0.0-0-d66a9e5a9c'. Jun 26 07:16:32.351881 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 26 07:16:32.353374 systemd[1]: Reached target time-set.target - System Time Set. Jun 26 07:16:32.357695 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 26 07:16:32.359333 systemd[1]: Reached target network.target - Network. Jun 26 07:16:32.361028 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 26 07:16:32.401158 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1382) Jun 26 07:16:32.428449 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 26 07:16:32.457601 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 26 07:16:32.460188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1381) Jun 26 07:16:32.461004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:32.461312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 26 07:16:32.474562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 26 07:16:32.486792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 26 07:16:32.537982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 26 07:16:32.540419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 26 07:16:32.540513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 26 07:16:32.540584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 26 07:16:32.550621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 26 07:16:32.552206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 26 07:16:32.564462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 26 07:16:32.565304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 26 07:16:32.575325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 26 07:16:32.575583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 26 07:16:32.639780 systemd-networkd[1373]: eth1: Configuring with /run/systemd/network/10-9e:ec:07:5b:fe:28.network. Jun 26 07:16:32.642726 systemd-networkd[1373]: eth0: Configuring with /run/systemd/network/10-2e:29:8e:d0:fe:c7.network. Jun 26 07:16:32.643082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 26 07:16:32.643224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 26 07:16:32.645944 systemd-networkd[1373]: eth1: Link UP Jun 26 07:16:32.646186 systemd-networkd[1373]: eth1: Gained carrier Jun 26 07:16:32.652084 systemd-networkd[1373]: eth0: Link UP Jun 26 07:16:32.652100 systemd-networkd[1373]: eth0: Gained carrier Jun 26 07:16:32.658840 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jun 26 07:16:32.688754 kernel: ISO 9660 Extensions: RRIP_1991A Jun 26 07:16:32.690673 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 26 07:16:33.222387 systemd-timesyncd[1348]: Contacted time server 216.229.4.66:123 (0.flatcar.pool.ntp.org). Jun 26 07:16:33.222481 systemd-timesyncd[1348]: Initial clock synchronization to Wed 2024-06-26 07:16:33.222246 UTC. Jun 26 07:16:33.222582 systemd-resolved[1324]: Clock change detected. Flushing caches. Jun 26 07:16:33.256500 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 26 07:16:33.294137 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 26 07:16:33.294630 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 26 07:16:33.297074 kernel: ACPI: button: Power Button [PWRF] Jun 26 07:16:33.376610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:33.417067 kernel: mousedev: PS/2 mouse device common for all mice Jun 26 07:16:33.424875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 26 07:16:33.445018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 26 07:16:33.510181 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 26 07:16:33.594093 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 26 07:16:33.600088 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 26 07:16:33.619066 kernel: Console: switching to colour dummy device 80x25 Jun 26 07:16:33.622725 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 26 07:16:33.622872 kernel: [drm] features: -context_init Jun 26 07:16:33.631149 kernel: [drm] number of scanouts: 1 Jun 26 07:16:33.631281 kernel: [drm] number of cap sets: 0 Jun 26 07:16:33.631361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 26 07:16:33.634406 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 26 07:16:33.642407 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 26 07:16:33.642547 kernel: Console: switching to colour frame buffer device 128x48 Jun 26 07:16:33.646252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:16:33.647170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:33.651075 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 26 07:16:33.652995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:33.666522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:33.683737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 26 07:16:33.685588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:33.699463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 26 07:16:33.730087 kernel: EDAC MC: Ver: 3.0.0 Jun 26 07:16:33.773972 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 26 07:16:33.787606 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 26 07:16:33.788698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 26 07:16:33.815499 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 26 07:16:33.864783 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 26 07:16:33.869002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 26 07:16:33.870607 systemd[1]: Reached target sysinit.target - System Initialization. Jun 26 07:16:33.871706 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 26 07:16:33.873356 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 26 07:16:33.873945 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 26 07:16:33.874298 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 26 07:16:33.874422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 26 07:16:33.874539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 26 07:16:33.874599 systemd[1]: Reached target paths.target - Path Units. Jun 26 07:16:33.874684 systemd[1]: Reached target timers.target - Timer Units. Jun 26 07:16:33.877694 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 26 07:16:33.880578 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 26 07:16:33.893762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 26 07:16:33.898808 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 26 07:16:33.914702 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 26 07:16:33.916744 systemd[1]: Reached target sockets.target - Socket Units. Jun 26 07:16:33.919537 systemd[1]: Reached target basic.target - Basic System. Jun 26 07:16:33.920642 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jun 26 07:16:33.920703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 26 07:16:33.924479 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 26 07:16:33.931417 systemd[1]: Starting containerd.service - containerd container runtime... Jun 26 07:16:33.946382 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 26 07:16:33.957230 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 26 07:16:33.969800 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 26 07:16:33.983523 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 26 07:16:33.984548 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 26 07:16:33.993470 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 26 07:16:34.005285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 26 07:16:34.019338 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 26 07:16:34.031871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 26 07:16:34.049853 coreos-metadata[1438]: Jun 26 07:16:34.048 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:16:34.049793 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 26 07:16:34.054872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 26 07:16:34.056745 jq[1440]: false Jun 26 07:16:34.057442 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 26 07:16:34.079440 coreos-metadata[1438]: Jun 26 07:16:34.064 INFO Fetch successful Jun 26 07:16:34.060569 systemd[1]: Starting update-engine.service - Update Engine... Jun 26 07:16:34.074260 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 26 07:16:34.080844 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 26 07:16:34.101891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 26 07:16:34.103490 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 26 07:16:34.105232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 26 07:16:34.106711 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 26 07:16:34.126789 update_engine[1449]: I0626 07:16:34.120250 1449 main.cc:92] Flatcar Update Engine starting Jun 26 07:16:34.145791 dbus-daemon[1439]: [system] SELinux support is enabled Jun 26 07:16:34.160665 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 26 07:16:34.173498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 26 07:16:34.186657 tar[1454]: linux-amd64/helm Jun 26 07:16:34.174809 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
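The coreos-metadata fetch above hits DigitalOcean's link-local metadata endpoint and succeeds on the first attempt. For reference, the equivalent request can be made with nothing but the standard library; it only works from inside a droplet, and the response schema is an assumption here rather than something this journal shows:

import json
import urllib.request

# Link-local metadata URL quoted in the coreos-metadata entries above.
URL = "http://169.254.169.254/metadata/v1.json"

with urllib.request.urlopen(URL, timeout=2) as resp:  # reachable only from a droplet
    metadata = json.load(resp)

# The exact keys are not shown in this journal, so just list whatever came back.
print(sorted(metadata))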
Jun 26 07:16:34.175915 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 26 07:16:34.176076 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 26 07:16:34.176112 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 26 07:16:34.199194 update_engine[1449]: I0626 07:16:34.192494 1449 update_check_scheduler.cc:74] Next update check in 7m42s Jun 26 07:16:34.212345 systemd[1]: Started update-engine.service - Update Engine. Jun 26 07:16:34.220333 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 26 07:16:34.225224 jq[1450]: true Jun 26 07:16:34.232439 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 26 07:16:34.257867 systemd-logind[1448]: New seat seat0. Jun 26 07:16:34.262853 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (Power Button) Jun 26 07:16:34.262905 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 26 07:16:34.263323 systemd[1]: Started systemd-logind.service - User Login Management. Jun 26 07:16:34.277767 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 26 07:16:34.295709 jq[1472]: true Jun 26 07:16:34.313058 extend-filesystems[1443]: Found loop4 Jun 26 07:16:34.328796 extend-filesystems[1443]: Found loop5 Jun 26 07:16:34.328796 extend-filesystems[1443]: Found loop6 Jun 26 07:16:34.328796 extend-filesystems[1443]: Found loop7 Jun 26 07:16:34.328796 extend-filesystems[1443]: Found vda Jun 26 07:16:34.328796 extend-filesystems[1443]: Found vda1 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda2 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda3 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found usr Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda4 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda6 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda7 Jun 26 07:16:34.365935 extend-filesystems[1443]: Found vda9 Jun 26 07:16:34.365935 extend-filesystems[1443]: Checking size of /dev/vda9 Jun 26 07:16:34.372690 systemd[1]: motdgen.service: Deactivated successfully. Jun 26 07:16:34.374305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 26 07:16:34.387627 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 26 07:16:34.404927 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 26 07:16:34.450639 extend-filesystems[1443]: Resized partition /dev/vda9 Jun 26 07:16:34.467081 extend-filesystems[1498]: resize2fs 1.47.0 (5-Feb-2023) Jun 26 07:16:34.479248 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 26 07:16:34.567567 systemd-networkd[1373]: eth0: Gained IPv6LL Jun 26 07:16:34.588583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1379) Jun 26 07:16:34.589787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 26 07:16:34.595674 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 26 07:16:34.606427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:34.617998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 26 07:16:34.718403 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jun 26 07:16:34.720281 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 26 07:16:34.742615 systemd[1]: Starting sshkeys.service... Jun 26 07:16:34.756359 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 26 07:16:34.829107 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 26 07:16:34.820029 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 26 07:16:34.841163 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 26 07:16:34.884830 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 26 07:16:34.887333 systemd-networkd[1373]: eth1: Gained IPv6LL Jun 26 07:16:34.903446 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 26 07:16:34.936474 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 26 07:16:34.936474 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 26 07:16:34.936474 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 26 07:16:34.955357 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jun 26 07:16:34.955357 extend-filesystems[1443]: Found vdb Jun 26 07:16:34.940736 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 26 07:16:34.941897 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 26 07:16:34.968005 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 26 07:16:35.009778 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 26 07:16:35.028588 systemd[1]: Started sshd@0-146.190.154.167:22-147.75.109.163:53356.service - OpenSSH per-connection server daemon (147.75.109.163:53356). Jun 26 07:16:35.038495 coreos-metadata[1532]: Jun 26 07:16:35.034 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 26 07:16:35.075241 coreos-metadata[1532]: Jun 26 07:16:35.053 INFO Fetch successful Jun 26 07:16:35.088258 unknown[1532]: wrote ssh authorized keys file for user: core Jun 26 07:16:35.126195 systemd[1]: issuegen.service: Deactivated successfully. Jun 26 07:16:35.126568 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 26 07:16:35.147184 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 26 07:16:35.180094 update-ssh-keys[1544]: Updated "/home/core/.ssh/authorized_keys" Jun 26 07:16:35.170502 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 26 07:16:35.183705 systemd[1]: Finished sshkeys.service. Jun 26 07:16:35.260579 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 26 07:16:35.278705 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 26 07:16:35.297141 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 26 07:16:35.299751 systemd[1]: Reached target getty.target - Login Prompts. 
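extend-filesystems is the first-boot step that grows the root filesystem to fill the disk; the kernel and resize2fs entries above record the before and after block counts for /dev/vda9. Converting those counts with the 4 KiB block size the log itself reports:

BLOCK_SIZE = 4096  # bytes, per the "(4k) blocks" note in the resize2fs output

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

for label, blocks in (("before resize", 553_472), ("after resize", 15_121_403)):
    print(f"/dev/vda9 {label}: {blocks_to_gib(blocks):.2f} GiB")
# Roughly 2.11 GiB before and 57.68 GiB after, i.e. the image's small root
# filesystem grown out to the droplet's full disk.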
Jun 26 07:16:35.350509 sshd[1539]: Accepted publickey for core from 147.75.109.163 port 53356 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:35.354753 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:35.385186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 26 07:16:35.400781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 26 07:16:35.417485 systemd-logind[1448]: New session 1 of user core. Jun 26 07:16:35.458081 containerd[1466]: time="2024-06-26T07:16:35.457499593Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 26 07:16:35.478305 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 26 07:16:35.500721 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 26 07:16:35.527704 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:35.570972 containerd[1466]: time="2024-06-26T07:16:35.569643659Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 26 07:16:35.571676 containerd[1466]: time="2024-06-26T07:16:35.571425799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.580655 containerd[1466]: time="2024-06-26T07:16:35.580481808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.581640226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.581997646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582028008Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582212362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582288459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582305788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582402317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582681863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582705830Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582721954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583261 containerd[1466]: time="2024-06-26T07:16:35.582937341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 26 07:16:35.583865 containerd[1466]: time="2024-06-26T07:16:35.582960657Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 26 07:16:35.583865 containerd[1466]: time="2024-06-26T07:16:35.583058112Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 26 07:16:35.583865 containerd[1466]: time="2024-06-26T07:16:35.583078994Z" level=info msg="metadata content store policy set" policy=shared Jun 26 07:16:35.615017 containerd[1466]: time="2024-06-26T07:16:35.614662577Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 26 07:16:35.615313 containerd[1466]: time="2024-06-26T07:16:35.615283165Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 26 07:16:35.615409 containerd[1466]: time="2024-06-26T07:16:35.615393965Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 26 07:16:35.615798 containerd[1466]: time="2024-06-26T07:16:35.615771702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 26 07:16:35.616851 containerd[1466]: time="2024-06-26T07:16:35.615898581Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 26 07:16:35.616976 containerd[1466]: time="2024-06-26T07:16:35.616958702Z" level=info msg="NRI interface is disabled by configuration." Jun 26 07:16:35.617999 containerd[1466]: time="2024-06-26T07:16:35.617567905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 26 07:16:35.618644 containerd[1466]: time="2024-06-26T07:16:35.618616027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 26 07:16:35.618799 containerd[1466]: time="2024-06-26T07:16:35.618777998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 26 07:16:35.618903 containerd[1466]: time="2024-06-26T07:16:35.618884971Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 26 07:16:35.620970 containerd[1466]: time="2024-06-26T07:16:35.620931196Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 26 07:16:35.621164 containerd[1466]: time="2024-06-26T07:16:35.621135508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jun 26 07:16:35.622433 containerd[1466]: time="2024-06-26T07:16:35.622397995Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625110142Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625176343Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625213506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625239631Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625259507Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625279555Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625546263Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 26 07:16:35.628086 containerd[1466]: time="2024-06-26T07:16:35.625973730Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 26 07:16:35.631171 containerd[1466]: time="2024-06-26T07:16:35.626031661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.631396 containerd[1466]: time="2024-06-26T07:16:35.631368146Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 26 07:16:35.631572 containerd[1466]: time="2024-06-26T07:16:35.631549004Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 26 07:16:35.634079 containerd[1466]: time="2024-06-26T07:16:35.631765073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.634324 containerd[1466]: time="2024-06-26T07:16:35.634290039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.634423 containerd[1466]: time="2024-06-26T07:16:35.634408441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.634539 containerd[1466]: time="2024-06-26T07:16:35.634523270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634636291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634661048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634678381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634698026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634719735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634910357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634935390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634954515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.634988306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.635011075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.635071959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.635092686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640075 containerd[1466]: time="2024-06-26T07:16:35.635109882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 26 07:16:35.640813 containerd[1466]: time="2024-06-26T07:16:35.635478066Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false 
EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 26 07:16:35.640813 containerd[1466]: time="2024-06-26T07:16:35.635584414Z" level=info msg="Connect containerd service" Jun 26 07:16:35.640813 containerd[1466]: time="2024-06-26T07:16:35.635630314Z" level=info msg="using legacy CRI server" Jun 26 07:16:35.640813 containerd[1466]: time="2024-06-26T07:16:35.635641090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 26 07:16:35.640813 containerd[1466]: time="2024-06-26T07:16:35.635760769Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 26 07:16:35.648080 containerd[1466]: time="2024-06-26T07:16:35.645624924Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 26 07:16:35.648471 containerd[1466]: time="2024-06-26T07:16:35.648418528Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 26 07:16:35.648845 containerd[1466]: time="2024-06-26T07:16:35.648808314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 26 07:16:35.651077 containerd[1466]: time="2024-06-26T07:16:35.648928739Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.651266210Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.651735210Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.651794110Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.649611242Z" level=info msg="Start subscribing containerd event" Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.651895082Z" level=info msg="Start recovering state" Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.651978640Z" level=info msg="Start event monitor" Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.652000107Z" level=info msg="Start snapshots syncer" Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.652014666Z" level=info msg="Start cni network conf syncer for default" Jun 26 07:16:35.654082 containerd[1466]: time="2024-06-26T07:16:35.652024684Z" level=info msg="Start streaming server" Jun 26 07:16:35.654819 systemd[1]: Started containerd.service - containerd container runtime. Jun 26 07:16:35.659133 containerd[1466]: time="2024-06-26T07:16:35.658337498Z" level=info msg="containerd successfully booted in 0.223666s" Jun 26 07:16:35.841559 systemd[1556]: Queued start job for default target default.target. Jun 26 07:16:35.848631 systemd[1556]: Created slice app.slice - User Application Slice. Jun 26 07:16:35.848694 systemd[1556]: Reached target paths.target - Paths. Jun 26 07:16:35.848735 systemd[1556]: Reached target timers.target - Timers. Jun 26 07:16:35.858294 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 26 07:16:35.891817 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 26 07:16:35.892550 systemd[1556]: Reached target sockets.target - Sockets. Jun 26 07:16:35.892584 systemd[1556]: Reached target basic.target - Basic System. Jun 26 07:16:35.892672 systemd[1556]: Reached target default.target - Main User Target. Jun 26 07:16:35.892724 systemd[1556]: Startup finished in 343ms. Jun 26 07:16:35.893234 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 26 07:16:35.905423 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 26 07:16:36.001531 systemd[1]: Started sshd@1-146.190.154.167:22-147.75.109.163:45802.service - OpenSSH per-connection server daemon (147.75.109.163:45802). Jun 26 07:16:36.113721 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 45802 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:36.118198 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:36.138615 systemd-logind[1448]: New session 2 of user core. Jun 26 07:16:36.148573 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 26 07:16:36.235027 sshd[1570]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:36.238915 tar[1454]: linux-amd64/LICENSE Jun 26 07:16:36.243343 tar[1454]: linux-amd64/README.md Jun 26 07:16:36.257511 systemd[1]: sshd@1-146.190.154.167:22-147.75.109.163:45802.service: Deactivated successfully. Jun 26 07:16:36.265606 systemd[1]: session-2.scope: Deactivated successfully. Jun 26 07:16:36.275523 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jun 26 07:16:36.281778 systemd[1]: Started sshd@2-146.190.154.167:22-147.75.109.163:45816.service - OpenSSH per-connection server daemon (147.75.109.163:45816). Jun 26 07:16:36.287215 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 26 07:16:36.295887 systemd-logind[1448]: Removed session 2. 
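The long "Start cri plugin with config {...}" entry above is containerd dumping its effective CRI configuration; the parts that matter for the kubelet later in this journal are the overlayfs snapshotter, the runc runtime with SystemdCgroup:true, and the registry.k8s.io/pause:3.8 sandbox image. A rough way to pull such fields back out of the logged text (this only scrapes an abridged copy of the message above, not containerd's real configuration file):

import re

# Abridged copy of the "Start cri plugin with config" message; "..." stands in
# for the parts of the dump not needed here.
cri_dump = ("Snapshotter:overlayfs DefaultRuntimeName:runc ... "
            "Options:map[SystemdCgroup:true] ... "
            "SandboxImage:registry.k8s.io/pause:3.8 ...")

for field in ("Snapshotter", "DefaultRuntimeName", "SystemdCgroup", "SandboxImage"):
    # First occurrence only; nested structs in the full dump repeat some names.
    match = re.search(rf"{field}:([^\s\]}}]+)", cri_dump)
    print(field, "=", match.group(1) if match else "<not found>")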
Jun 26 07:16:36.353393 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 45816 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:36.355966 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:36.367238 systemd-logind[1448]: New session 3 of user core. Jun 26 07:16:36.373496 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 26 07:16:36.459209 sshd[1579]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:36.466920 systemd[1]: sshd@2-146.190.154.167:22-147.75.109.163:45816.service: Deactivated successfully. Jun 26 07:16:36.470254 systemd[1]: session-3.scope: Deactivated successfully. Jun 26 07:16:36.474878 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jun 26 07:16:36.478010 systemd-logind[1448]: Removed session 3. Jun 26 07:16:37.138146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:37.142078 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 26 07:16:37.147314 systemd[1]: Startup finished in 1.931s (kernel) + 10.791s (initrd) + 10.009s (userspace) = 22.733s. Jun 26 07:16:37.170655 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:16:38.539062 kubelet[1591]: E0626 07:16:38.538854 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:16:38.544634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:16:38.544939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:16:38.545582 systemd[1]: kubelet.service: Consumed 1.737s CPU time. Jun 26 07:16:46.483629 systemd[1]: Started sshd@3-146.190.154.167:22-147.75.109.163:50524.service - OpenSSH per-connection server daemon (147.75.109.163:50524). Jun 26 07:16:46.534252 sshd[1604]: Accepted publickey for core from 147.75.109.163 port 50524 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:46.536979 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:46.546170 systemd-logind[1448]: New session 4 of user core. Jun 26 07:16:46.558488 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 26 07:16:46.630440 sshd[1604]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:46.649400 systemd[1]: sshd@3-146.190.154.167:22-147.75.109.163:50524.service: Deactivated successfully. Jun 26 07:16:46.652934 systemd[1]: session-4.scope: Deactivated successfully. Jun 26 07:16:46.657001 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jun 26 07:16:46.669590 systemd[1]: Started sshd@4-146.190.154.167:22-147.75.109.163:50536.service - OpenSSH per-connection server daemon (147.75.109.163:50536). Jun 26 07:16:46.671582 systemd-logind[1448]: Removed session 4. Jun 26 07:16:46.725408 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 50536 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:46.727736 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:46.736928 systemd-logind[1448]: New session 5 of user core. 
Jun 26 07:16:46.747516 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 26 07:16:46.811367 sshd[1611]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:46.832439 systemd[1]: sshd@4-146.190.154.167:22-147.75.109.163:50536.service: Deactivated successfully. Jun 26 07:16:46.835529 systemd[1]: session-5.scope: Deactivated successfully. Jun 26 07:16:46.838649 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jun 26 07:16:46.848832 systemd[1]: Started sshd@5-146.190.154.167:22-147.75.109.163:50544.service - OpenSSH per-connection server daemon (147.75.109.163:50544). Jun 26 07:16:46.851338 systemd-logind[1448]: Removed session 5. Jun 26 07:16:46.922297 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 50544 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:46.925149 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:46.933948 systemd-logind[1448]: New session 6 of user core. Jun 26 07:16:46.942563 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 26 07:16:47.022170 sshd[1618]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:47.040327 systemd[1]: sshd@5-146.190.154.167:22-147.75.109.163:50544.service: Deactivated successfully. Jun 26 07:16:47.045835 systemd[1]: session-6.scope: Deactivated successfully. Jun 26 07:16:47.049333 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jun 26 07:16:47.064106 systemd[1]: Started sshd@6-146.190.154.167:22-147.75.109.163:50546.service - OpenSSH per-connection server daemon (147.75.109.163:50546). Jun 26 07:16:47.067306 systemd-logind[1448]: Removed session 6. Jun 26 07:16:47.126109 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 50546 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:47.129057 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:47.139588 systemd-logind[1448]: New session 7 of user core. Jun 26 07:16:47.145541 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 26 07:16:47.241843 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 26 07:16:47.242482 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:16:47.262599 sudo[1628]: pam_unix(sudo:session): session closed for user root Jun 26 07:16:47.267543 sshd[1625]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:47.280309 systemd[1]: sshd@6-146.190.154.167:22-147.75.109.163:50546.service: Deactivated successfully. Jun 26 07:16:47.283595 systemd[1]: session-7.scope: Deactivated successfully. Jun 26 07:16:47.287326 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jun 26 07:16:47.293586 systemd[1]: Started sshd@7-146.190.154.167:22-147.75.109.163:50554.service - OpenSSH per-connection server daemon (147.75.109.163:50554). Jun 26 07:16:47.296273 systemd-logind[1448]: Removed session 7. Jun 26 07:16:47.359207 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 50554 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:47.361856 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:47.370472 systemd-logind[1448]: New session 8 of user core. Jun 26 07:16:47.381489 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 26 07:16:47.448262 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 26 07:16:47.448823 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:16:47.455319 sudo[1637]: pam_unix(sudo:session): session closed for user root Jun 26 07:16:47.465393 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 26 07:16:47.465933 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:16:47.500648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 26 07:16:47.505313 auditctl[1640]: No rules Jun 26 07:16:47.506802 systemd[1]: audit-rules.service: Deactivated successfully. Jun 26 07:16:47.507867 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 26 07:16:47.518765 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 26 07:16:47.563378 augenrules[1658]: No rules Jun 26 07:16:47.566284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 26 07:16:47.569006 sudo[1636]: pam_unix(sudo:session): session closed for user root Jun 26 07:16:47.573695 sshd[1633]: pam_unix(sshd:session): session closed for user core Jun 26 07:16:47.588319 systemd[1]: sshd@7-146.190.154.167:22-147.75.109.163:50554.service: Deactivated successfully. Jun 26 07:16:47.591370 systemd[1]: session-8.scope: Deactivated successfully. Jun 26 07:16:47.594466 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jun 26 07:16:47.609185 systemd[1]: Started sshd@8-146.190.154.167:22-147.75.109.163:50560.service - OpenSSH per-connection server daemon (147.75.109.163:50560). Jun 26 07:16:47.611483 systemd-logind[1448]: Removed session 8. Jun 26 07:16:47.660335 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 50560 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:16:47.663142 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:16:47.671022 systemd-logind[1448]: New session 9 of user core. Jun 26 07:16:47.682395 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 26 07:16:47.749553 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 26 07:16:47.750032 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 26 07:16:47.974596 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 26 07:16:47.990844 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 26 07:16:48.610955 dockerd[1678]: time="2024-06-26T07:16:48.610128701Z" level=info msg="Starting up" Jun 26 07:16:48.612300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 26 07:16:48.622376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:48.852794 dockerd[1678]: time="2024-06-26T07:16:48.852469877Z" level=info msg="Loading containers: start." Jun 26 07:16:48.872426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 26 07:16:48.887481 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:16:49.083162 kubelet[1696]: E0626 07:16:49.081512 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:16:49.103679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:16:49.109776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:16:49.198164 kernel: Initializing XFRM netlink socket Jun 26 07:16:49.440470 systemd-networkd[1373]: docker0: Link UP Jun 26 07:16:49.499540 dockerd[1678]: time="2024-06-26T07:16:49.499326969Z" level=info msg="Loading containers: done." Jun 26 07:16:49.693686 dockerd[1678]: time="2024-06-26T07:16:49.692518259Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 26 07:16:49.693686 dockerd[1678]: time="2024-06-26T07:16:49.692950773Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 26 07:16:49.693686 dockerd[1678]: time="2024-06-26T07:16:49.693200424Z" level=info msg="Daemon has completed initialization" Jun 26 07:16:49.805919 dockerd[1678]: time="2024-06-26T07:16:49.803363036Z" level=info msg="API listen on /run/docker.sock" Jun 26 07:16:49.806762 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 26 07:16:51.603484 containerd[1466]: time="2024-06-26T07:16:51.602695855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 26 07:16:52.797716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155885297.mount: Deactivated successfully. 
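Once dockerd finishes loading containers it reports "API listen on /run/docker.sock". A minimal way to confirm the Engine API is answering on that socket from the standard library, using the /_ping liveness endpoint (run as a user allowed to open the socket; this is an illustrative check, not part of the boot flow above):

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix socket instead of a TCP host."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")  # path from the dockerd entry above
conn.request("GET", "/_ping")                  # Docker Engine API liveness check
resp = conn.getresponse()
print(resp.status, resp.read().decode())       # expect: 200 OK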
Jun 26 07:16:57.300693 containerd[1466]: time="2024-06-26T07:16:57.293714069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:57.304781 containerd[1466]: time="2024-06-26T07:16:57.304638326Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jun 26 07:16:57.308273 containerd[1466]: time="2024-06-26T07:16:57.306637698Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:57.321205 containerd[1466]: time="2024-06-26T07:16:57.321121889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:16:57.324379 containerd[1466]: time="2024-06-26T07:16:57.324273611Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 5.721467098s" Jun 26 07:16:57.324379 containerd[1466]: time="2024-06-26T07:16:57.324355209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 26 07:16:57.391420 containerd[1466]: time="2024-06-26T07:16:57.391342465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 26 07:16:59.177383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 26 07:16:59.189829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:16:59.454623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:16:59.467062 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:16:59.588239 kubelet[1903]: E0626 07:16:59.588143 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:16:59.592433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:16:59.592716 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
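The pull above reports both the transferred size (35232637 bytes for kube-apiserver:v1.29.6) and the elapsed time (about 5.72 s), so the effective registry throughput falls out directly; the same arithmetic applies to the controller-manager, scheduler, proxy, coredns, pause and etcd pulls later in this journal. Using the figures quoted in the log:

# (bytes transferred, seconds) copied from the "Pulled image ..." entry above;
# further pulls from this journal can be added to the dict the same way.
pulls = {
    "kube-apiserver:v1.29.6": (35_232_637, 5.721467098),
}

for image, (size, seconds) in pulls.items():
    print(f"{image}: {size / seconds / 2**20:.2f} MiB/s effective pull rate")
# About 5.9 MiB/s for the apiserver image.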
Jun 26 07:17:00.500646 containerd[1466]: time="2024-06-26T07:17:00.500547972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:00.504189 containerd[1466]: time="2024-06-26T07:17:00.503461891Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jun 26 07:17:00.507411 containerd[1466]: time="2024-06-26T07:17:00.507329475Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:00.516294 containerd[1466]: time="2024-06-26T07:17:00.515750863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:00.530090 containerd[1466]: time="2024-06-26T07:17:00.522528989Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 3.131112366s" Jun 26 07:17:00.530090 containerd[1466]: time="2024-06-26T07:17:00.529958350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 26 07:17:00.592697 containerd[1466]: time="2024-06-26T07:17:00.591795330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 26 07:17:04.369126 containerd[1466]: time="2024-06-26T07:17:04.367587111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:04.374145 containerd[1466]: time="2024-06-26T07:17:04.374014851Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jun 26 07:17:04.378626 containerd[1466]: time="2024-06-26T07:17:04.378528772Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:04.399391 containerd[1466]: time="2024-06-26T07:17:04.399298649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:04.405471 containerd[1466]: time="2024-06-26T07:17:04.402731874Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 3.810840818s" Jun 26 07:17:04.405471 containerd[1466]: time="2024-06-26T07:17:04.402803448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 26 07:17:04.485192 
containerd[1466]: time="2024-06-26T07:17:04.485040380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 26 07:17:07.176018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548554898.mount: Deactivated successfully. Jun 26 07:17:08.268802 containerd[1466]: time="2024-06-26T07:17:08.267295754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:08.275385 containerd[1466]: time="2024-06-26T07:17:08.275296969Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jun 26 07:17:08.291730 containerd[1466]: time="2024-06-26T07:17:08.291646040Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:08.303086 containerd[1466]: time="2024-06-26T07:17:08.301269702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:08.303086 containerd[1466]: time="2024-06-26T07:17:08.302780088Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 3.81707272s" Jun 26 07:17:08.303086 containerd[1466]: time="2024-06-26T07:17:08.302859210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 26 07:17:08.353311 containerd[1466]: time="2024-06-26T07:17:08.353209634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 26 07:17:09.183225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260821045.mount: Deactivated successfully. Jun 26 07:17:09.678580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 26 07:17:09.690994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:10.106533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:10.128493 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:17:10.345237 kubelet[1957]: E0626 07:17:10.345068 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:17:10.351844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:17:10.352161 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 26 07:17:11.739568 containerd[1466]: time="2024-06-26T07:17:11.738886450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:11.753257 containerd[1466]: time="2024-06-26T07:17:11.753062658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 26 07:17:11.766092 containerd[1466]: time="2024-06-26T07:17:11.762736786Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:11.782022 containerd[1466]: time="2024-06-26T07:17:11.781941009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:11.797578 containerd[1466]: time="2024-06-26T07:17:11.797472962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.44417956s" Jun 26 07:17:11.800775 containerd[1466]: time="2024-06-26T07:17:11.798575110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 26 07:17:11.909408 containerd[1466]: time="2024-06-26T07:17:11.909336531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 26 07:17:13.062083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871792463.mount: Deactivated successfully. 
Jun 26 07:17:13.111746 containerd[1466]: time="2024-06-26T07:17:13.111601349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:13.116120 containerd[1466]: time="2024-06-26T07:17:13.115574666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 26 07:17:13.119308 containerd[1466]: time="2024-06-26T07:17:13.119238001Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:13.131837 containerd[1466]: time="2024-06-26T07:17:13.130139107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:13.131837 containerd[1466]: time="2024-06-26T07:17:13.131563859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.221828451s" Jun 26 07:17:13.131837 containerd[1466]: time="2024-06-26T07:17:13.131656886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 26 07:17:13.209912 containerd[1466]: time="2024-06-26T07:17:13.209849333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 26 07:17:14.173951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641270478.mount: Deactivated successfully. Jun 26 07:17:19.070451 containerd[1466]: time="2024-06-26T07:17:19.068943791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:19.077706 containerd[1466]: time="2024-06-26T07:17:19.076915871Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 26 07:17:19.087098 containerd[1466]: time="2024-06-26T07:17:19.086893797Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:19.100403 containerd[1466]: time="2024-06-26T07:17:19.099173262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:17:19.103287 containerd[1466]: time="2024-06-26T07:17:19.103210108Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.892983898s" Jun 26 07:17:19.103602 containerd[1466]: time="2024-06-26T07:17:19.103559758Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 26 07:17:19.440321 update_engine[1449]: I0626 07:17:19.440181 1449 update_attempter.cc:509] Updating boot flags... 
Jun 26 07:17:19.544106 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2073) Jun 26 07:17:19.697242 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2074) Jun 26 07:17:19.897681 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2074) Jun 26 07:17:20.427648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 26 07:17:20.438640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:20.783400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:20.801862 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 26 07:17:20.939131 kubelet[2139]: E0626 07:17:20.938892 2139 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 26 07:17:20.943616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 26 07:17:20.944205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 26 07:17:25.389376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:25.400632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:25.463506 systemd[1]: Reloading requested from client PID 2154 ('systemctl') (unit session-9.scope)... Jun 26 07:17:25.463846 systemd[1]: Reloading... Jun 26 07:17:25.678320 zram_generator::config[2188]: No configuration found. Jun 26 07:17:26.002433 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:17:26.180556 systemd[1]: Reloading finished in 715 ms. Jun 26 07:17:26.305895 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 26 07:17:26.306032 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 26 07:17:26.307462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:26.322968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:26.507257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:26.537130 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 26 07:17:26.639160 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:17:26.640027 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 26 07:17:26.640027 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 26 07:17:26.640666 kubelet[2246]: I0626 07:17:26.640514 2246 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 26 07:17:27.732913 kubelet[2246]: I0626 07:17:27.732827 2246 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 26 07:17:27.732913 kubelet[2246]: I0626 07:17:27.732889 2246 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 26 07:17:27.733704 kubelet[2246]: I0626 07:17:27.733375 2246 server.go:919] "Client rotation is on, will bootstrap in background" Jun 26 07:17:27.786462 kubelet[2246]: I0626 07:17:27.782552 2246 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:17:27.786462 kubelet[2246]: E0626 07:17:27.783425 2246 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.154.167:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.824079 kubelet[2246]: I0626 07:17:27.820164 2246 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 26 07:17:27.829091 kubelet[2246]: I0626 07:17:27.828672 2246 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 26 07:17:27.831077 kubelet[2246]: I0626 07:17:27.830960 2246 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 26 07:17:27.834094 kubelet[2246]: I0626 07:17:27.833513 2246 topology_manager.go:138] "Creating topology manager with none policy" Jun 26 07:17:27.834094 kubelet[2246]: I0626 07:17:27.833664 2246 container_manager_linux.go:301] "Creating device plugin manager" Jun 26 07:17:27.837618 kubelet[2246]: I0626 07:17:27.837540 2246 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:17:27.838495 kubelet[2246]: I0626 07:17:27.838457 2246 kubelet.go:396] "Attempting to sync node with API server" Jun 26 07:17:27.839145 
kubelet[2246]: I0626 07:17:27.838731 2246 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 26 07:17:27.839145 kubelet[2246]: I0626 07:17:27.838801 2246 kubelet.go:312] "Adding apiserver pod source" Jun 26 07:17:27.839145 kubelet[2246]: I0626 07:17:27.838819 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 26 07:17:27.841333 kubelet[2246]: W0626 07:17:27.841215 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.154.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-d66a9e5a9c&limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.841333 kubelet[2246]: E0626 07:17:27.841316 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.154.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-d66a9e5a9c&limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.842718 kubelet[2246]: W0626 07:17:27.842502 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.154.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.842718 kubelet[2246]: E0626 07:17:27.842584 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.154.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.843321 kubelet[2246]: I0626 07:17:27.842881 2246 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 26 07:17:27.858808 kubelet[2246]: I0626 07:17:27.858732 2246 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 26 07:17:27.866465 kubelet[2246]: W0626 07:17:27.865098 2246 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 26 07:17:27.866967 kubelet[2246]: I0626 07:17:27.866926 2246 server.go:1256] "Started kubelet" Jun 26 07:17:27.876667 kubelet[2246]: I0626 07:17:27.872413 2246 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 26 07:17:27.876667 kubelet[2246]: I0626 07:17:27.874109 2246 server.go:461] "Adding debug handlers to kubelet server" Jun 26 07:17:27.881019 kubelet[2246]: I0626 07:17:27.880353 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 26 07:17:27.885331 kubelet[2246]: I0626 07:17:27.885247 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 26 07:17:27.885951 kubelet[2246]: I0626 07:17:27.885910 2246 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 26 07:17:27.900094 kubelet[2246]: I0626 07:17:27.899419 2246 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 26 07:17:27.901419 kubelet[2246]: I0626 07:17:27.901181 2246 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 26 07:17:27.901419 kubelet[2246]: I0626 07:17:27.901373 2246 reconciler_new.go:29] "Reconciler: start to sync state" Jun 26 07:17:27.904851 kubelet[2246]: W0626 07:17:27.903390 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.904851 kubelet[2246]: E0626 07:17:27.903479 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.904851 kubelet[2246]: E0626 07:17:27.903608 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.154.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-d66a9e5a9c?timeout=10s\": dial tcp 146.190.154.167:6443: connect: connection refused" interval="200ms" Jun 26 07:17:27.905876 kubelet[2246]: E0626 07:17:27.905798 2246 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.154.167:6443/api/v1/namespaces/default/events\": dial tcp 146.190.154.167:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-0-d66a9e5a9c.17dc7caec7cc1314 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-0-d66a9e5a9c,UID:ci-4012.0.0-0-d66a9e5a9c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-0-d66a9e5a9c,},FirstTimestamp:2024-06-26 07:17:27.866880788 +0000 UTC m=+1.319807040,LastTimestamp:2024-06-26 07:17:27.866880788 +0000 UTC m=+1.319807040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-0-d66a9e5a9c,}" Jun 26 07:17:27.912316 kubelet[2246]: I0626 07:17:27.911102 2246 factory.go:221] Registration of the systemd container factory successfully Jun 26 07:17:27.912316 kubelet[2246]: I0626 07:17:27.911270 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Jun 26 07:17:27.917327 kubelet[2246]: I0626 07:17:27.916904 2246 factory.go:221] Registration of the containerd container factory successfully Jun 26 07:17:27.945775 kubelet[2246]: E0626 07:17:27.943635 2246 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 26 07:17:27.950327 kubelet[2246]: I0626 07:17:27.950280 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 26 07:17:27.954969 kubelet[2246]: I0626 07:17:27.954921 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 26 07:17:27.955256 kubelet[2246]: I0626 07:17:27.955234 2246 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 26 07:17:27.955416 kubelet[2246]: I0626 07:17:27.955400 2246 kubelet.go:2329] "Starting kubelet main sync loop" Jun 26 07:17:27.955571 kubelet[2246]: E0626 07:17:27.955553 2246 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 26 07:17:27.956868 kubelet[2246]: W0626 07:17:27.956772 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.154.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.956868 kubelet[2246]: E0626 07:17:27.956841 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.154.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:27.958150 kubelet[2246]: I0626 07:17:27.957889 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 26 07:17:27.958150 kubelet[2246]: I0626 07:17:27.957922 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 26 07:17:27.958150 kubelet[2246]: I0626 07:17:27.957951 2246 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:17:27.968140 kubelet[2246]: I0626 07:17:27.968092 2246 policy_none.go:49] "None policy: Start" Jun 26 07:17:27.970604 kubelet[2246]: I0626 07:17:27.970079 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 26 07:17:27.970604 kubelet[2246]: I0626 07:17:27.970131 2246 state_mem.go:35] "Initializing new in-memory state store" Jun 26 07:17:27.994981 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 26 07:17:28.010959 kubelet[2246]: I0626 07:17:28.010908 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.013417 kubelet[2246]: E0626 07:17:28.013323 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.154.167:6443/api/v1/nodes\": dial tcp 146.190.154.167:6443: connect: connection refused" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.018468 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 26 07:17:28.026713 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 26 07:17:28.052678 kubelet[2246]: I0626 07:17:28.052587 2246 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 26 07:17:28.054450 kubelet[2246]: I0626 07:17:28.054398 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 26 07:17:28.057499 kubelet[2246]: I0626 07:17:28.055776 2246 topology_manager.go:215] "Topology Admit Handler" podUID="a86c5c60188a2bc8035c6dfc102f50dc" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.057499 kubelet[2246]: I0626 07:17:28.057332 2246 topology_manager.go:215] "Topology Admit Handler" podUID="2ab901e3fcd278adc2e41008d09ea0ea" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.060520 kubelet[2246]: I0626 07:17:28.060029 2246 topology_manager.go:215] "Topology Admit Handler" podUID="31dd12f6c3fa0963584d8a2f130eeceb" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.063930 kubelet[2246]: E0626 07:17:28.063877 2246 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-0-d66a9e5a9c\" not found" Jun 26 07:17:28.087725 systemd[1]: Created slice kubepods-burstable-poda86c5c60188a2bc8035c6dfc102f50dc.slice - libcontainer container kubepods-burstable-poda86c5c60188a2bc8035c6dfc102f50dc.slice. Jun 26 07:17:28.106256 kubelet[2246]: E0626 07:17:28.105907 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.154.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-d66a9e5a9c?timeout=10s\": dial tcp 146.190.154.167:6443: connect: connection refused" interval="400ms" Jun 26 07:17:28.107813 systemd[1]: Created slice kubepods-burstable-pod2ab901e3fcd278adc2e41008d09ea0ea.slice - libcontainer container kubepods-burstable-pod2ab901e3fcd278adc2e41008d09ea0ea.slice. 
Jun 26 07:17:28.110949 kubelet[2246]: I0626 07:17:28.109931 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.110949 kubelet[2246]: I0626 07:17:28.110012 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.110949 kubelet[2246]: I0626 07:17:28.110079 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.110949 kubelet[2246]: I0626 07:17:28.110231 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.110949 kubelet[2246]: I0626 07:17:28.110355 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31dd12f6c3fa0963584d8a2f130eeceb-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"31dd12f6c3fa0963584d8a2f130eeceb\") " pod="kube-system/kube-scheduler-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.111490 kubelet[2246]: I0626 07:17:28.110401 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.111490 kubelet[2246]: I0626 07:17:28.110432 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.111490 kubelet[2246]: I0626 07:17:28.110485 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.111490 kubelet[2246]: I0626 07:17:28.110518 2246 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.142306 systemd[1]: Created slice kubepods-burstable-pod31dd12f6c3fa0963584d8a2f130eeceb.slice - libcontainer container kubepods-burstable-pod31dd12f6c3fa0963584d8a2f130eeceb.slice. Jun 26 07:17:28.214938 kubelet[2246]: I0626 07:17:28.214881 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.215585 kubelet[2246]: E0626 07:17:28.215480 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.154.167:6443/api/v1/nodes\": dial tcp 146.190.154.167:6443: connect: connection refused" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.400866 kubelet[2246]: E0626 07:17:28.400456 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:28.403461 containerd[1466]: time="2024-06-26T07:17:28.401524892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-0-d66a9e5a9c,Uid:a86c5c60188a2bc8035c6dfc102f50dc,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:28.419622 kubelet[2246]: E0626 07:17:28.419210 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:28.434880 containerd[1466]: time="2024-06-26T07:17:28.432934052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c,Uid:2ab901e3fcd278adc2e41008d09ea0ea,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:28.454208 kubelet[2246]: E0626 07:17:28.452244 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:28.454407 containerd[1466]: time="2024-06-26T07:17:28.453283741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-0-d66a9e5a9c,Uid:31dd12f6c3fa0963584d8a2f130eeceb,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:28.507917 kubelet[2246]: E0626 07:17:28.507593 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.154.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-d66a9e5a9c?timeout=10s\": dial tcp 146.190.154.167:6443: connect: connection refused" interval="800ms" Jun 26 07:17:28.620361 kubelet[2246]: I0626 07:17:28.619374 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.620361 kubelet[2246]: E0626 07:17:28.620236 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.154.167:6443/api/v1/nodes\": dial tcp 146.190.154.167:6443: connect: connection refused" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:28.707537 kubelet[2246]: W0626 07:17:28.707438 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.154.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-d66a9e5a9c&limit=500&resourceVersion=0": dial tcp 
146.190.154.167:6443: connect: connection refused Jun 26 07:17:28.708215 kubelet[2246]: E0626 07:17:28.708178 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.154.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-0-d66a9e5a9c&limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:28.797249 kubelet[2246]: E0626 07:17:28.797135 2246 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.154.167:6443/api/v1/namespaces/default/events\": dial tcp 146.190.154.167:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-0-d66a9e5a9c.17dc7caec7cc1314 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-0-d66a9e5a9c,UID:ci-4012.0.0-0-d66a9e5a9c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-0-d66a9e5a9c,},FirstTimestamp:2024-06-26 07:17:27.866880788 +0000 UTC m=+1.319807040,LastTimestamp:2024-06-26 07:17:27.866880788 +0000 UTC m=+1.319807040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-0-d66a9e5a9c,}" Jun 26 07:17:28.941850 kubelet[2246]: W0626 07:17:28.940585 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:28.941850 kubelet[2246]: E0626 07:17:28.940696 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.232794 kubelet[2246]: W0626 07:17:29.232594 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.154.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.232794 kubelet[2246]: E0626 07:17:29.232740 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.154.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.255626 kubelet[2246]: W0626 07:17:29.253957 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.154.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.255626 kubelet[2246]: E0626 07:17:29.254109 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.154.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.287053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86440933.mount: Deactivated successfully. 
Jun 26 07:17:29.309878 kubelet[2246]: E0626 07:17:29.309778 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.154.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-0-d66a9e5a9c?timeout=10s\": dial tcp 146.190.154.167:6443: connect: connection refused" interval="1.6s" Jun 26 07:17:29.315397 containerd[1466]: time="2024-06-26T07:17:29.315269458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:17:29.345806 containerd[1466]: time="2024-06-26T07:17:29.345365798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 26 07:17:29.352101 containerd[1466]: time="2024-06-26T07:17:29.350509714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:17:29.359232 containerd[1466]: time="2024-06-26T07:17:29.354752675Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:17:29.366786 containerd[1466]: time="2024-06-26T07:17:29.364972665Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:17:29.373409 containerd[1466]: time="2024-06-26T07:17:29.373158213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 26 07:17:29.378227 containerd[1466]: time="2024-06-26T07:17:29.376111132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 26 07:17:29.383579 containerd[1466]: time="2024-06-26T07:17:29.383501546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 26 07:17:29.385578 containerd[1466]: time="2024-06-26T07:17:29.385501524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 952.331209ms" Jun 26 07:17:29.389883 containerd[1466]: time="2024-06-26T07:17:29.389801697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 936.375383ms" Jun 26 07:17:29.390706 containerd[1466]: time="2024-06-26T07:17:29.390452444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
988.779988ms" Jun 26 07:17:29.423724 kubelet[2246]: I0626 07:17:29.423104 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:29.435106 kubelet[2246]: E0626 07:17:29.427262 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.154.167:6443/api/v1/nodes\": dial tcp 146.190.154.167:6443: connect: connection refused" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:29.743187 containerd[1466]: time="2024-06-26T07:17:29.742358535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:29.743187 containerd[1466]: time="2024-06-26T07:17:29.742464243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.743187 containerd[1466]: time="2024-06-26T07:17:29.742490820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:29.743187 containerd[1466]: time="2024-06-26T07:17:29.742510741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.748507 containerd[1466]: time="2024-06-26T07:17:29.748244111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:29.750421 containerd[1466]: time="2024-06-26T07:17:29.750273413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.751619 containerd[1466]: time="2024-06-26T07:17:29.751519991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:29.752308 containerd[1466]: time="2024-06-26T07:17:29.752207187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.764829 containerd[1466]: time="2024-06-26T07:17:29.764366718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:29.764829 containerd[1466]: time="2024-06-26T07:17:29.764473034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.764829 containerd[1466]: time="2024-06-26T07:17:29.764506950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:29.764829 containerd[1466]: time="2024-06-26T07:17:29.764534854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:29.792766 systemd[1]: Started cri-containerd-429c86c79566ce29d11c07a481cb8868c82b64b2714432ed10b3ffd8be9ea675.scope - libcontainer container 429c86c79566ce29d11c07a481cb8868c82b64b2714432ed10b3ffd8be9ea675. Jun 26 07:17:29.813536 systemd[1]: Started cri-containerd-e2cffe9df2102bbe249b353e02ab4df0da1647767429fcf0fe1a4e6089042121.scope - libcontainer container e2cffe9df2102bbe249b353e02ab4df0da1647767429fcf0fe1a4e6089042121. 
Jun 26 07:17:29.830645 kubelet[2246]: E0626 07:17:29.829009 2246 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.154.167:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:29.862850 systemd[1]: Started cri-containerd-56a50abd04e6e536fbc7d7bef856b824bcc2b0c82207b138d23ff20524ef616b.scope - libcontainer container 56a50abd04e6e536fbc7d7bef856b824bcc2b0c82207b138d23ff20524ef616b. Jun 26 07:17:29.942150 containerd[1466]: time="2024-06-26T07:17:29.941627357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-0-d66a9e5a9c,Uid:31dd12f6c3fa0963584d8a2f130eeceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"429c86c79566ce29d11c07a481cb8868c82b64b2714432ed10b3ffd8be9ea675\"" Jun 26 07:17:29.946467 kubelet[2246]: E0626 07:17:29.946173 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:29.963123 containerd[1466]: time="2024-06-26T07:17:29.961897714Z" level=info msg="CreateContainer within sandbox \"429c86c79566ce29d11c07a481cb8868c82b64b2714432ed10b3ffd8be9ea675\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 26 07:17:29.981597 containerd[1466]: time="2024-06-26T07:17:29.981087186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c,Uid:2ab901e3fcd278adc2e41008d09ea0ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2cffe9df2102bbe249b353e02ab4df0da1647767429fcf0fe1a4e6089042121\"" Jun 26 07:17:29.983544 kubelet[2246]: E0626 07:17:29.983246 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:30.002457 containerd[1466]: time="2024-06-26T07:17:30.002242280Z" level=info msg="CreateContainer within sandbox \"e2cffe9df2102bbe249b353e02ab4df0da1647767429fcf0fe1a4e6089042121\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 26 07:17:30.035120 containerd[1466]: time="2024-06-26T07:17:30.035017037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-0-d66a9e5a9c,Uid:a86c5c60188a2bc8035c6dfc102f50dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a50abd04e6e536fbc7d7bef856b824bcc2b0c82207b138d23ff20524ef616b\"" Jun 26 07:17:30.036742 kubelet[2246]: E0626 07:17:30.036706 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:30.055521 containerd[1466]: time="2024-06-26T07:17:30.055447651Z" level=info msg="CreateContainer within sandbox \"56a50abd04e6e536fbc7d7bef856b824bcc2b0c82207b138d23ff20524ef616b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 26 07:17:30.068965 containerd[1466]: time="2024-06-26T07:17:30.068657261Z" level=info msg="CreateContainer within sandbox \"429c86c79566ce29d11c07a481cb8868c82b64b2714432ed10b3ffd8be9ea675\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d57fb3d03972983ea81d9bc4ba906445dac818cd8ce4fb7df687203180773452\"" Jun 26 07:17:30.073235 
containerd[1466]: time="2024-06-26T07:17:30.071556957Z" level=info msg="StartContainer for \"d57fb3d03972983ea81d9bc4ba906445dac818cd8ce4fb7df687203180773452\"" Jun 26 07:17:30.082525 containerd[1466]: time="2024-06-26T07:17:30.082439117Z" level=info msg="CreateContainer within sandbox \"e2cffe9df2102bbe249b353e02ab4df0da1647767429fcf0fe1a4e6089042121\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26b349d704d55ce35259a0c0b2d9d0ca46db509b21fb04f5f80f8191e4b91368\"" Jun 26 07:17:30.085085 containerd[1466]: time="2024-06-26T07:17:30.083197478Z" level=info msg="StartContainer for \"26b349d704d55ce35259a0c0b2d9d0ca46db509b21fb04f5f80f8191e4b91368\"" Jun 26 07:17:30.124592 containerd[1466]: time="2024-06-26T07:17:30.123970147Z" level=info msg="CreateContainer within sandbox \"56a50abd04e6e536fbc7d7bef856b824bcc2b0c82207b138d23ff20524ef616b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a1a2ed5c248682484b187fb8d84670e8a672659147c6acff5f6203a6e3a5647\"" Jun 26 07:17:30.127087 containerd[1466]: time="2024-06-26T07:17:30.127001478Z" level=info msg="StartContainer for \"4a1a2ed5c248682484b187fb8d84670e8a672659147c6acff5f6203a6e3a5647\"" Jun 26 07:17:30.138673 systemd[1]: Started cri-containerd-d57fb3d03972983ea81d9bc4ba906445dac818cd8ce4fb7df687203180773452.scope - libcontainer container d57fb3d03972983ea81d9bc4ba906445dac818cd8ce4fb7df687203180773452. Jun 26 07:17:30.166397 systemd[1]: Started cri-containerd-26b349d704d55ce35259a0c0b2d9d0ca46db509b21fb04f5f80f8191e4b91368.scope - libcontainer container 26b349d704d55ce35259a0c0b2d9d0ca46db509b21fb04f5f80f8191e4b91368. Jun 26 07:17:30.226607 systemd[1]: Started cri-containerd-4a1a2ed5c248682484b187fb8d84670e8a672659147c6acff5f6203a6e3a5647.scope - libcontainer container 4a1a2ed5c248682484b187fb8d84670e8a672659147c6acff5f6203a6e3a5647. 
Jun 26 07:17:30.358251 containerd[1466]: time="2024-06-26T07:17:30.356187422Z" level=info msg="StartContainer for \"d57fb3d03972983ea81d9bc4ba906445dac818cd8ce4fb7df687203180773452\" returns successfully" Jun 26 07:17:30.372267 containerd[1466]: time="2024-06-26T07:17:30.370416030Z" level=info msg="StartContainer for \"4a1a2ed5c248682484b187fb8d84670e8a672659147c6acff5f6203a6e3a5647\" returns successfully" Jun 26 07:17:30.393602 containerd[1466]: time="2024-06-26T07:17:30.393490506Z" level=info msg="StartContainer for \"26b349d704d55ce35259a0c0b2d9d0ca46db509b21fb04f5f80f8191e4b91368\" returns successfully" Jun 26 07:17:30.679968 kubelet[2246]: W0626 07:17:30.679872 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:30.680288 kubelet[2246]: E0626 07:17:30.679992 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.154.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.154.167:6443: connect: connection refused Jun 26 07:17:30.996012 kubelet[2246]: E0626 07:17:30.995603 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:31.005745 kubelet[2246]: E0626 07:17:31.005687 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:31.013205 kubelet[2246]: E0626 07:17:31.013111 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:31.030789 kubelet[2246]: I0626 07:17:31.030736 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:32.016306 kubelet[2246]: E0626 07:17:32.016245 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:32.018570 kubelet[2246]: E0626 07:17:32.018088 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:33.020210 kubelet[2246]: E0626 07:17:33.020142 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:33.021026 kubelet[2246]: E0626 07:17:33.020430 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:33.421289 kubelet[2246]: E0626 07:17:33.420643 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-0-d66a9e5a9c\" not found" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:33.475559 kubelet[2246]: I0626 07:17:33.475497 2246 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:33.846347 kubelet[2246]: I0626 07:17:33.846275 2246 apiserver.go:52] "Watching apiserver" Jun 26 07:17:33.902496 kubelet[2246]: I0626 07:17:33.902357 2246 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 07:17:37.361681 systemd[1]: Reloading requested from client PID 2519 ('systemctl') (unit session-9.scope)... Jun 26 07:17:37.362391 systemd[1]: Reloading... Jun 26 07:17:37.681180 zram_generator::config[2562]: No configuration found. Jun 26 07:17:38.136234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 26 07:17:38.452777 systemd[1]: Reloading finished in 1089 ms. Jun 26 07:17:38.555031 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:38.581276 systemd[1]: kubelet.service: Deactivated successfully. Jun 26 07:17:38.581919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:38.582055 systemd[1]: kubelet.service: Consumed 1.902s CPU time, 110.1M memory peak, 0B memory swap peak. Jun 26 07:17:38.612259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 26 07:17:38.951341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 26 07:17:38.986616 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 26 07:17:39.219948 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:17:39.219948 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 26 07:17:39.219948 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 26 07:17:39.227222 kubelet[2610]: I0626 07:17:39.225256 2610 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 26 07:17:39.263925 kubelet[2610]: I0626 07:17:39.259789 2610 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 26 07:17:39.263925 kubelet[2610]: I0626 07:17:39.259856 2610 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 26 07:17:39.263925 kubelet[2610]: I0626 07:17:39.260427 2610 server.go:919] "Client rotation is on, will bootstrap in background" Jun 26 07:17:39.278874 kubelet[2610]: I0626 07:17:39.277897 2610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 26 07:17:39.299243 kubelet[2610]: I0626 07:17:39.298845 2610 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 26 07:17:39.345474 kubelet[2610]: I0626 07:17:39.344937 2610 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 26 07:17:39.349991 kubelet[2610]: I0626 07:17:39.348418 2610 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 26 07:17:39.349991 kubelet[2610]: I0626 07:17:39.349889 2610 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.351897 2610 topology_manager.go:138] "Creating topology manager with none policy" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.351942 2610 container_manager_linux.go:301] "Creating device plugin manager" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.352013 2610 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.352275 2610 kubelet.go:396] "Attempting to sync node with API server" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.352307 2610 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.352351 2610 kubelet.go:312] "Adding apiserver pod source" Jun 26 07:17:39.358612 kubelet[2610]: I0626 07:17:39.352377 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 26 07:17:39.353679 sudo[2626]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 26 07:17:39.354260 sudo[2626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 26 07:17:39.383510 kubelet[2610]: I0626 07:17:39.376890 2610 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 26 07:17:39.383510 kubelet[2610]: I0626 07:17:39.377265 2610 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 26 07:17:39.383510 kubelet[2610]: I0626 07:17:39.378115 2610 server.go:1256] "Started kubelet" Jun 26 07:17:39.423793 kubelet[2610]: I0626 07:17:39.423697 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 26 07:17:39.426865 kubelet[2610]: I0626 07:17:39.426806 2610 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 26 07:17:39.433843 
kubelet[2610]: I0626 07:17:39.429918 2610 server.go:461] "Adding debug handlers to kubelet server" Jun 26 07:17:39.444334 kubelet[2610]: I0626 07:17:39.443631 2610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 26 07:17:39.444334 kubelet[2610]: I0626 07:17:39.444101 2610 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 26 07:17:39.464937 kubelet[2610]: I0626 07:17:39.464201 2610 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 26 07:17:39.465326 kubelet[2610]: I0626 07:17:39.465149 2610 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 26 07:17:39.465461 kubelet[2610]: I0626 07:17:39.465431 2610 reconciler_new.go:29] "Reconciler: start to sync state" Jun 26 07:17:39.492488 kubelet[2610]: I0626 07:17:39.489716 2610 factory.go:221] Registration of the systemd container factory successfully Jun 26 07:17:39.497808 kubelet[2610]: I0626 07:17:39.494619 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 26 07:17:39.528151 kubelet[2610]: I0626 07:17:39.527355 2610 factory.go:221] Registration of the containerd container factory successfully Jun 26 07:17:39.556784 kubelet[2610]: I0626 07:17:39.556580 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 26 07:17:39.560116 kubelet[2610]: I0626 07:17:39.559784 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 26 07:17:39.560116 kubelet[2610]: I0626 07:17:39.559839 2610 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 26 07:17:39.560116 kubelet[2610]: I0626 07:17:39.559870 2610 kubelet.go:2329] "Starting kubelet main sync loop" Jun 26 07:17:39.560116 kubelet[2610]: E0626 07:17:39.559971 2610 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 26 07:17:39.565686 kubelet[2610]: E0626 07:17:39.565192 2610 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jun 26 07:17:39.588189 kubelet[2610]: I0626 07:17:39.587559 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.601264 kubelet[2610]: E0626 07:17:39.600954 2610 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 26 07:17:39.635187 kubelet[2610]: I0626 07:17:39.634920 2610 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.635187 kubelet[2610]: I0626 07:17:39.635082 2610 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.662531 kubelet[2610]: E0626 07:17:39.662236 2610 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 26 07:17:39.759936 kubelet[2610]: I0626 07:17:39.759418 2610 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 26 07:17:39.759936 kubelet[2610]: I0626 07:17:39.759459 2610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 26 07:17:39.759936 kubelet[2610]: I0626 07:17:39.759490 2610 state_mem.go:36] "Initialized new in-memory state store" Jun 26 07:17:39.763225 kubelet[2610]: I0626 07:17:39.762719 2610 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 26 07:17:39.763225 kubelet[2610]: I0626 07:17:39.762798 2610 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 26 07:17:39.763225 kubelet[2610]: I0626 07:17:39.762892 2610 policy_none.go:49] "None policy: Start" Jun 26 07:17:39.766648 kubelet[2610]: I0626 07:17:39.765964 2610 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 26 07:17:39.766648 kubelet[2610]: I0626 07:17:39.766205 2610 state_mem.go:35] "Initializing new in-memory state store" Jun 26 07:17:39.767625 kubelet[2610]: I0626 07:17:39.767349 2610 state_mem.go:75] "Updated machine memory state" Jun 26 07:17:39.783725 kubelet[2610]: I0626 07:17:39.781853 2610 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 26 07:17:39.786292 kubelet[2610]: I0626 07:17:39.785857 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 26 07:17:39.863926 kubelet[2610]: I0626 07:17:39.863796 2610 topology_manager.go:215] "Topology Admit Handler" podUID="a86c5c60188a2bc8035c6dfc102f50dc" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.864241 kubelet[2610]: I0626 07:17:39.864099 2610 topology_manager.go:215] "Topology Admit Handler" podUID="2ab901e3fcd278adc2e41008d09ea0ea" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.866695 kubelet[2610]: I0626 07:17:39.866526 2610 topology_manager.go:215] "Topology Admit Handler" podUID="31dd12f6c3fa0963584d8a2f130eeceb" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877054 kubelet[2610]: I0626 07:17:39.872800 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877054 kubelet[2610]: I0626 07:17:39.872923 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 
26 07:17:39.877054 kubelet[2610]: I0626 07:17:39.872987 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877054 kubelet[2610]: I0626 07:17:39.873220 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877054 kubelet[2610]: I0626 07:17:39.873308 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ab901e3fcd278adc2e41008d09ea0ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"2ab901e3fcd278adc2e41008d09ea0ea\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877450 kubelet[2610]: I0626 07:17:39.873362 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877450 kubelet[2610]: I0626 07:17:39.873421 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.877450 kubelet[2610]: I0626 07:17:39.873485 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a86c5c60188a2bc8035c6dfc102f50dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"a86c5c60188a2bc8035c6dfc102f50dc\") " pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:39.900503 kubelet[2610]: W0626 07:17:39.897803 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:39.913454 kubelet[2610]: W0626 07:17:39.913406 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:39.919313 kubelet[2610]: W0626 07:17:39.919266 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 26 07:17:39.973958 kubelet[2610]: I0626 07:17:39.973892 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31dd12f6c3fa0963584d8a2f130eeceb-kubeconfig\") pod 
\"kube-scheduler-ci-4012.0.0-0-d66a9e5a9c\" (UID: \"31dd12f6c3fa0963584d8a2f130eeceb\") " pod="kube-system/kube-scheduler-ci-4012.0.0-0-d66a9e5a9c" Jun 26 07:17:40.204943 kubelet[2610]: E0626 07:17:40.204891 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.216982 kubelet[2610]: E0626 07:17:40.215452 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.221473 kubelet[2610]: E0626 07:17:40.221410 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.384580 kubelet[2610]: I0626 07:17:40.384128 2610 apiserver.go:52] "Watching apiserver" Jun 26 07:17:40.466667 kubelet[2610]: I0626 07:17:40.466307 2610 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 26 07:17:40.681661 kubelet[2610]: E0626 07:17:40.678250 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.681661 kubelet[2610]: E0626 07:17:40.681514 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.681661 kubelet[2610]: E0626 07:17:40.681514 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:40.681999 kubelet[2610]: I0626 07:17:40.681963 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-0-d66a9e5a9c" podStartSLOduration=1.681901528 podStartE2EDuration="1.681901528s" podCreationTimestamp="2024-06-26 07:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:40.677464379 +0000 UTC m=+1.654567861" watchObservedRunningTime="2024-06-26 07:17:40.681901528 +0000 UTC m=+1.659005002" Jun 26 07:17:40.724797 kubelet[2610]: I0626 07:17:40.723138 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-0-d66a9e5a9c" podStartSLOduration=1.723068493 podStartE2EDuration="1.723068493s" podCreationTimestamp="2024-06-26 07:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:40.716651331 +0000 UTC m=+1.693754808" watchObservedRunningTime="2024-06-26 07:17:40.723068493 +0000 UTC m=+1.700171988" Jun 26 07:17:40.798161 kubelet[2610]: I0626 07:17:40.798079 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-0-d66a9e5a9c" podStartSLOduration=1.797972375 podStartE2EDuration="1.797972375s" podCreationTimestamp="2024-06-26 07:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:40.762149555 +0000 UTC m=+1.739253032" 
watchObservedRunningTime="2024-06-26 07:17:40.797972375 +0000 UTC m=+1.775075856" Jun 26 07:17:40.918718 sudo[2626]: pam_unix(sudo:session): session closed for user root Jun 26 07:17:41.682002 kubelet[2610]: E0626 07:17:41.681550 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:43.768293 sudo[1669]: pam_unix(sudo:session): session closed for user root Jun 26 07:17:43.775018 sshd[1666]: pam_unix(sshd:session): session closed for user core Jun 26 07:17:43.785531 systemd[1]: sshd@8-146.190.154.167:22-147.75.109.163:50560.service: Deactivated successfully. Jun 26 07:17:43.795215 systemd[1]: session-9.scope: Deactivated successfully. Jun 26 07:17:43.801304 systemd[1]: session-9.scope: Consumed 9.299s CPU time, 135.3M memory peak, 0B memory swap peak. Jun 26 07:17:43.803514 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jun 26 07:17:43.805772 systemd-logind[1448]: Removed session 9. Jun 26 07:17:46.579927 kubelet[2610]: E0626 07:17:46.579426 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:46.699681 kubelet[2610]: E0626 07:17:46.699390 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:47.617577 kubelet[2610]: E0626 07:17:47.617451 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:47.704588 kubelet[2610]: E0626 07:17:47.704498 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:48.930641 kubelet[2610]: E0626 07:17:48.930561 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:49.712584 kubelet[2610]: E0626 07:17:49.712515 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:50.043757 kubelet[2610]: I0626 07:17:50.043553 2610 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 26 07:17:50.048118 containerd[1466]: time="2024-06-26T07:17:50.046356288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 26 07:17:50.049578 kubelet[2610]: I0626 07:17:50.047414 2610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 26 07:17:50.193851 kubelet[2610]: I0626 07:17:50.193580 2610 topology_manager.go:215] "Topology Admit Handler" podUID="044634a1-e64e-415c-8bdf-acf0f4e35386" podNamespace="kube-system" podName="kube-proxy-wdkdt" Jun 26 07:17:50.220625 systemd[1]: Created slice kubepods-besteffort-pod044634a1_e64e_415c_8bdf_acf0f4e35386.slice - libcontainer container kubepods-besteffort-pod044634a1_e64e_415c_8bdf_acf0f4e35386.slice. 
Jun 26 07:17:50.228506 kubelet[2610]: I0626 07:17:50.228448 2610 topology_manager.go:215] "Topology Admit Handler" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" podNamespace="kube-system" podName="cilium-kzfjf" Jun 26 07:17:50.248822 systemd[1]: Created slice kubepods-burstable-pod36bcdf01_5fdb_43df_99f8_e47854022908.slice - libcontainer container kubepods-burstable-pod36bcdf01_5fdb_43df_99f8_e47854022908.slice. Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282367 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-xtables-lock\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282442 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-kernel\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282479 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-hubble-tls\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282515 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-net\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282551 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/044634a1-e64e-415c-8bdf-acf0f4e35386-kube-proxy\") pod \"kube-proxy-wdkdt\" (UID: \"044634a1-e64e-415c-8bdf-acf0f4e35386\") " pod="kube-system/kube-proxy-wdkdt" Jun 26 07:17:50.283515 kubelet[2610]: I0626 07:17:50.282643 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-run\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282681 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-hostproc\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282710 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cni-path\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282747 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bcdf01-5fdb-43df-99f8-e47854022908-clustermesh-secrets\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282796 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044634a1-e64e-415c-8bdf-acf0f4e35386-lib-modules\") pod \"kube-proxy-wdkdt\" (UID: \"044634a1-e64e-415c-8bdf-acf0f4e35386\") " pod="kube-system/kube-proxy-wdkdt" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282838 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-cgroup\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.283993 kubelet[2610]: I0626 07:17:50.282882 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rx8l\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-kube-api-access-2rx8l\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.282919 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/044634a1-e64e-415c-8bdf-acf0f4e35386-xtables-lock\") pod \"kube-proxy-wdkdt\" (UID: \"044634a1-e64e-415c-8bdf-acf0f4e35386\") " pod="kube-system/kube-proxy-wdkdt" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.282968 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-etc-cni-netd\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.283031 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-lib-modules\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.283091 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-bpf-maps\") pod \"cilium-kzfjf\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.283138 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vwl\" (UniqueName: \"kubernetes.io/projected/044634a1-e64e-415c-8bdf-acf0f4e35386-kube-api-access-48vwl\") pod \"kube-proxy-wdkdt\" (UID: \"044634a1-e64e-415c-8bdf-acf0f4e35386\") " pod="kube-system/kube-proxy-wdkdt" Jun 26 07:17:50.284368 kubelet[2610]: I0626 07:17:50.283279 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-config-path\") pod \"cilium-kzfjf\" (UID: 
\"36bcdf01-5fdb-43df-99f8-e47854022908\") " pod="kube-system/cilium-kzfjf" Jun 26 07:17:50.535135 kubelet[2610]: E0626 07:17:50.534326 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:50.539796 containerd[1466]: time="2024-06-26T07:17:50.539671810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdkdt,Uid:044634a1-e64e-415c-8bdf-acf0f4e35386,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:50.563014 kubelet[2610]: E0626 07:17:50.562933 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:50.564771 containerd[1466]: time="2024-06-26T07:17:50.564183778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzfjf,Uid:36bcdf01-5fdb-43df-99f8-e47854022908,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:50.636654 containerd[1466]: time="2024-06-26T07:17:50.635854208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:50.636654 containerd[1466]: time="2024-06-26T07:17:50.635975902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:50.636654 containerd[1466]: time="2024-06-26T07:17:50.636088897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:50.636654 containerd[1466]: time="2024-06-26T07:17:50.636117042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:50.665839 containerd[1466]: time="2024-06-26T07:17:50.665619974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:50.668867 containerd[1466]: time="2024-06-26T07:17:50.668198272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:50.668867 containerd[1466]: time="2024-06-26T07:17:50.668278784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:50.668867 containerd[1466]: time="2024-06-26T07:17:50.668300190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:50.719263 systemd[1]: Started cri-containerd-479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa.scope - libcontainer container 479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa. Jun 26 07:17:50.742434 systemd[1]: Started cri-containerd-f70f6a182ef59c12363a1f107b7c2492fe2e828a881768a6aebad846ac242147.scope - libcontainer container f70f6a182ef59c12363a1f107b7c2492fe2e828a881768a6aebad846ac242147. 
Jun 26 07:17:50.775084 kubelet[2610]: I0626 07:17:50.774872 2610 topology_manager.go:215] "Topology Admit Handler" podUID="a2d78978-79ab-4b06-9d9b-3c67bc431161" podNamespace="kube-system" podName="cilium-operator-5cc964979-zsr72" Jun 26 07:17:50.793988 systemd[1]: Created slice kubepods-besteffort-poda2d78978_79ab_4b06_9d9b_3c67bc431161.slice - libcontainer container kubepods-besteffort-poda2d78978_79ab_4b06_9d9b_3c67bc431161.slice. Jun 26 07:17:50.890149 kubelet[2610]: I0626 07:17:50.888711 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tc8\" (UniqueName: \"kubernetes.io/projected/a2d78978-79ab-4b06-9d9b-3c67bc431161-kube-api-access-g5tc8\") pod \"cilium-operator-5cc964979-zsr72\" (UID: \"a2d78978-79ab-4b06-9d9b-3c67bc431161\") " pod="kube-system/cilium-operator-5cc964979-zsr72" Jun 26 07:17:50.890149 kubelet[2610]: I0626 07:17:50.888787 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d78978-79ab-4b06-9d9b-3c67bc431161-cilium-config-path\") pod \"cilium-operator-5cc964979-zsr72\" (UID: \"a2d78978-79ab-4b06-9d9b-3c67bc431161\") " pod="kube-system/cilium-operator-5cc964979-zsr72" Jun 26 07:17:50.902079 containerd[1466]: time="2024-06-26T07:17:50.902008790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzfjf,Uid:36bcdf01-5fdb-43df-99f8-e47854022908,Namespace:kube-system,Attempt:0,} returns sandbox id \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\"" Jun 26 07:17:50.915484 kubelet[2610]: E0626 07:17:50.914244 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:50.924369 containerd[1466]: time="2024-06-26T07:17:50.924301324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdkdt,Uid:044634a1-e64e-415c-8bdf-acf0f4e35386,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70f6a182ef59c12363a1f107b7c2492fe2e828a881768a6aebad846ac242147\"" Jun 26 07:17:50.931248 kubelet[2610]: E0626 07:17:50.931151 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:50.933680 containerd[1466]: time="2024-06-26T07:17:50.933629416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 26 07:17:50.953574 containerd[1466]: time="2024-06-26T07:17:50.953281633Z" level=info msg="CreateContainer within sandbox \"f70f6a182ef59c12363a1f107b7c2492fe2e828a881768a6aebad846ac242147\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 26 07:17:51.036179 containerd[1466]: time="2024-06-26T07:17:51.036105562Z" level=info msg="CreateContainer within sandbox \"f70f6a182ef59c12363a1f107b7c2492fe2e828a881768a6aebad846ac242147\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"57036ace736f9d1f41cedf486490bac07192a1d87d148dcd2d783f327f390133\"" Jun 26 07:17:51.039113 containerd[1466]: time="2024-06-26T07:17:51.037673684Z" level=info msg="StartContainer for \"57036ace736f9d1f41cedf486490bac07192a1d87d148dcd2d783f327f390133\"" Jun 26 07:17:51.095750 systemd[1]: Started cri-containerd-57036ace736f9d1f41cedf486490bac07192a1d87d148dcd2d783f327f390133.scope - libcontainer 
container 57036ace736f9d1f41cedf486490bac07192a1d87d148dcd2d783f327f390133. Jun 26 07:17:51.100910 kubelet[2610]: E0626 07:17:51.100868 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:51.106026 containerd[1466]: time="2024-06-26T07:17:51.105138103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zsr72,Uid:a2d78978-79ab-4b06-9d9b-3c67bc431161,Namespace:kube-system,Attempt:0,}" Jun 26 07:17:51.182746 containerd[1466]: time="2024-06-26T07:17:51.182239808Z" level=info msg="StartContainer for \"57036ace736f9d1f41cedf486490bac07192a1d87d148dcd2d783f327f390133\" returns successfully" Jun 26 07:17:51.203097 containerd[1466]: time="2024-06-26T07:17:51.202785712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:17:51.203097 containerd[1466]: time="2024-06-26T07:17:51.202881677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:51.203097 containerd[1466]: time="2024-06-26T07:17:51.202906372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:17:51.203097 containerd[1466]: time="2024-06-26T07:17:51.202928474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:17:51.252547 systemd[1]: Started cri-containerd-9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b.scope - libcontainer container 9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b. Jun 26 07:17:51.375326 containerd[1466]: time="2024-06-26T07:17:51.375157419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zsr72,Uid:a2d78978-79ab-4b06-9d9b-3c67bc431161,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\"" Jun 26 07:17:51.376731 kubelet[2610]: E0626 07:17:51.376625 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:51.725169 kubelet[2610]: E0626 07:17:51.724690 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:17:51.776239 kubelet[2610]: I0626 07:17:51.775962 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wdkdt" podStartSLOduration=1.772777142 podStartE2EDuration="1.772777142s" podCreationTimestamp="2024-06-26 07:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:17:51.772168133 +0000 UTC m=+12.749271611" watchObservedRunningTime="2024-06-26 07:17:51.772777142 +0000 UTC m=+12.749880620" Jun 26 07:18:01.000375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4279362112.mount: Deactivated successfully. 
Jun 26 07:18:11.624560 containerd[1466]: time="2024-06-26T07:18:11.624224780Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:11.629266 containerd[1466]: time="2024-06-26T07:18:11.629181118Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343" Jun 26 07:18:11.637126 containerd[1466]: time="2024-06-26T07:18:11.636277799Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:11.647425 containerd[1466]: time="2024-06-26T07:18:11.647349223Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 20.713374894s" Jun 26 07:18:11.647761 containerd[1466]: time="2024-06-26T07:18:11.647698485Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 26 07:18:11.650498 containerd[1466]: time="2024-06-26T07:18:11.649993493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 26 07:18:11.657834 containerd[1466]: time="2024-06-26T07:18:11.657409071Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 26 07:18:11.824617 containerd[1466]: time="2024-06-26T07:18:11.824536486Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\"" Jun 26 07:18:11.826867 containerd[1466]: time="2024-06-26T07:18:11.826795057Z" level=info msg="StartContainer for \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\"" Jun 26 07:18:12.045505 systemd[1]: Started cri-containerd-6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2.scope - libcontainer container 6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2. Jun 26 07:18:12.186088 containerd[1466]: time="2024-06-26T07:18:12.185615562Z" level=info msg="StartContainer for \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\" returns successfully" Jun 26 07:18:12.207943 systemd[1]: cri-containerd-6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2.scope: Deactivated successfully. 
Jun 26 07:18:12.713961 containerd[1466]: time="2024-06-26T07:18:12.678747705Z" level=info msg="shim disconnected" id=6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2 namespace=k8s.io Jun 26 07:18:12.715323 containerd[1466]: time="2024-06-26T07:18:12.713974340Z" level=warning msg="cleaning up after shim disconnected" id=6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2 namespace=k8s.io Jun 26 07:18:12.715323 containerd[1466]: time="2024-06-26T07:18:12.714002704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:18:12.799836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2-rootfs.mount: Deactivated successfully. Jun 26 07:18:13.160415 kubelet[2610]: E0626 07:18:13.158890 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:13.185196 containerd[1466]: time="2024-06-26T07:18:13.181950799Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 26 07:18:13.318015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775093717.mount: Deactivated successfully. Jun 26 07:18:13.349059 containerd[1466]: time="2024-06-26T07:18:13.347963977Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\"" Jun 26 07:18:13.352240 containerd[1466]: time="2024-06-26T07:18:13.351859441Z" level=info msg="StartContainer for \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\"" Jun 26 07:18:13.462415 systemd[1]: Started cri-containerd-fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6.scope - libcontainer container fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6. Jun 26 07:18:13.567207 containerd[1466]: time="2024-06-26T07:18:13.566862463Z" level=info msg="StartContainer for \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\" returns successfully" Jun 26 07:18:13.613160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 26 07:18:13.613800 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 26 07:18:13.613945 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:18:13.633743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 26 07:18:13.640595 systemd[1]: cri-containerd-fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6.scope: Deactivated successfully. Jun 26 07:18:13.698947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 26 07:18:13.759996 containerd[1466]: time="2024-06-26T07:18:13.759113889Z" level=info msg="shim disconnected" id=fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6 namespace=k8s.io Jun 26 07:18:13.759996 containerd[1466]: time="2024-06-26T07:18:13.759522342Z" level=warning msg="cleaning up after shim disconnected" id=fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6 namespace=k8s.io Jun 26 07:18:13.759996 containerd[1466]: time="2024-06-26T07:18:13.759588429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:18:13.814666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6-rootfs.mount: Deactivated successfully. Jun 26 07:18:13.817908 containerd[1466]: time="2024-06-26T07:18:13.817799581Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:18:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 26 07:18:14.168961 kubelet[2610]: E0626 07:18:14.167092 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:14.195761 containerd[1466]: time="2024-06-26T07:18:14.193719003Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 26 07:18:14.381578 containerd[1466]: time="2024-06-26T07:18:14.381480749Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\"" Jun 26 07:18:14.386650 containerd[1466]: time="2024-06-26T07:18:14.384276476Z" level=info msg="StartContainer for \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\"" Jun 26 07:18:14.485518 systemd[1]: Started cri-containerd-f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e.scope - libcontainer container f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e. Jun 26 07:18:14.609323 systemd[1]: cri-containerd-f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e.scope: Deactivated successfully. Jun 26 07:18:14.790821 containerd[1466]: time="2024-06-26T07:18:14.790438667Z" level=info msg="StartContainer for \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\" returns successfully" Jun 26 07:18:14.886052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e-rootfs.mount: Deactivated successfully. 
Jun 26 07:18:15.141492 containerd[1466]: time="2024-06-26T07:18:15.140809452Z" level=info msg="shim disconnected" id=f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e namespace=k8s.io Jun 26 07:18:15.141492 containerd[1466]: time="2024-06-26T07:18:15.140893088Z" level=warning msg="cleaning up after shim disconnected" id=f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e namespace=k8s.io Jun 26 07:18:15.141492 containerd[1466]: time="2024-06-26T07:18:15.140906926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:18:15.184436 kubelet[2610]: E0626 07:18:15.183595 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:15.750566 containerd[1466]: time="2024-06-26T07:18:15.750490578Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:15.754497 containerd[1466]: time="2024-06-26T07:18:15.753780274Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229" Jun 26 07:18:15.759868 containerd[1466]: time="2024-06-26T07:18:15.759717814Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 26 07:18:15.765918 containerd[1466]: time="2024-06-26T07:18:15.765504235Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.115424608s" Jun 26 07:18:15.765918 containerd[1466]: time="2024-06-26T07:18:15.765596176Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 26 07:18:15.774133 containerd[1466]: time="2024-06-26T07:18:15.773283269Z" level=info msg="CreateContainer within sandbox \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 26 07:18:15.918268 containerd[1466]: time="2024-06-26T07:18:15.918169380Z" level=info msg="CreateContainer within sandbox \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\"" Jun 26 07:18:15.921539 containerd[1466]: time="2024-06-26T07:18:15.919015463Z" level=info msg="StartContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\"" Jun 26 07:18:16.002607 systemd[1]: Started cri-containerd-b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f.scope - libcontainer container b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f. 
Jun 26 07:18:16.097192 containerd[1466]: time="2024-06-26T07:18:16.097118501Z" level=info msg="StartContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" returns successfully" Jun 26 07:18:16.200840 kubelet[2610]: E0626 07:18:16.198883 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:16.206533 kubelet[2610]: E0626 07:18:16.206491 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:16.214863 containerd[1466]: time="2024-06-26T07:18:16.213517652Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 26 07:18:16.310328 kubelet[2610]: I0626 07:18:16.308691 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zsr72" podStartSLOduration=1.92021598 podStartE2EDuration="26.308618248s" podCreationTimestamp="2024-06-26 07:17:50 +0000 UTC" firstStartedPulling="2024-06-26 07:17:51.378215377 +0000 UTC m=+12.355318833" lastFinishedPulling="2024-06-26 07:18:15.766617631 +0000 UTC m=+36.743721101" observedRunningTime="2024-06-26 07:18:16.245460837 +0000 UTC m=+37.222564308" watchObservedRunningTime="2024-06-26 07:18:16.308618248 +0000 UTC m=+37.285721728" Jun 26 07:18:16.371012 containerd[1466]: time="2024-06-26T07:18:16.370931525Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\"" Jun 26 07:18:16.374435 containerd[1466]: time="2024-06-26T07:18:16.374383944Z" level=info msg="StartContainer for \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\"" Jun 26 07:18:16.496372 systemd[1]: Started cri-containerd-5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df.scope - libcontainer container 5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df. Jun 26 07:18:16.573280 containerd[1466]: time="2024-06-26T07:18:16.572273435Z" level=info msg="StartContainer for \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\" returns successfully" Jun 26 07:18:16.572566 systemd[1]: cri-containerd-5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df.scope: Deactivated successfully. 
Jun 26 07:18:16.660226 containerd[1466]: time="2024-06-26T07:18:16.658874369Z" level=info msg="shim disconnected" id=5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df namespace=k8s.io Jun 26 07:18:16.660226 containerd[1466]: time="2024-06-26T07:18:16.659088484Z" level=warning msg="cleaning up after shim disconnected" id=5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df namespace=k8s.io Jun 26 07:18:16.660226 containerd[1466]: time="2024-06-26T07:18:16.659115992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:18:16.698165 containerd[1466]: time="2024-06-26T07:18:16.696460783Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:18:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 26 07:18:16.949262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df-rootfs.mount: Deactivated successfully. Jun 26 07:18:17.229463 kubelet[2610]: E0626 07:18:17.228113 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:17.231097 kubelet[2610]: E0626 07:18:17.230566 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:17.237440 containerd[1466]: time="2024-06-26T07:18:17.237373303Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 26 07:18:17.305951 containerd[1466]: time="2024-06-26T07:18:17.305839507Z" level=info msg="CreateContainer within sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\"" Jun 26 07:18:17.306880 containerd[1466]: time="2024-06-26T07:18:17.306829649Z" level=info msg="StartContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\"" Jun 26 07:18:17.311742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714608865.mount: Deactivated successfully. Jun 26 07:18:17.409397 systemd[1]: Started cri-containerd-b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535.scope - libcontainer container b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535. 
Jun 26 07:18:17.536708 containerd[1466]: time="2024-06-26T07:18:17.534501421Z" level=info msg="StartContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" returns successfully" Jun 26 07:18:18.053113 kubelet[2610]: I0626 07:18:18.052615 2610 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 26 07:18:18.200284 kubelet[2610]: I0626 07:18:18.199630 2610 topology_manager.go:215] "Topology Admit Handler" podUID="53a2d4ef-7e51-422a-9344-b5a5da804fb2" podNamespace="kube-system" podName="coredns-76f75df574-9p7vk" Jun 26 07:18:18.208563 kubelet[2610]: I0626 07:18:18.207931 2610 topology_manager.go:215] "Topology Admit Handler" podUID="84be4be2-9405-41f0-9cfe-b97c7e3eef6e" podNamespace="kube-system" podName="coredns-76f75df574-hwn29" Jun 26 07:18:18.219360 systemd[1]: Created slice kubepods-burstable-pod53a2d4ef_7e51_422a_9344_b5a5da804fb2.slice - libcontainer container kubepods-burstable-pod53a2d4ef_7e51_422a_9344_b5a5da804fb2.slice. Jun 26 07:18:18.250122 systemd[1]: Created slice kubepods-burstable-pod84be4be2_9405_41f0_9cfe_b97c7e3eef6e.slice - libcontainer container kubepods-burstable-pod84be4be2_9405_41f0_9cfe_b97c7e3eef6e.slice. Jun 26 07:18:18.275073 kubelet[2610]: E0626 07:18:18.273597 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:18.292054 kubelet[2610]: I0626 07:18:18.291704 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnn8\" (UniqueName: \"kubernetes.io/projected/53a2d4ef-7e51-422a-9344-b5a5da804fb2-kube-api-access-2vnn8\") pod \"coredns-76f75df574-9p7vk\" (UID: \"53a2d4ef-7e51-422a-9344-b5a5da804fb2\") " pod="kube-system/coredns-76f75df574-9p7vk" Jun 26 07:18:18.292054 kubelet[2610]: I0626 07:18:18.291785 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a2d4ef-7e51-422a-9344-b5a5da804fb2-config-volume\") pod \"coredns-76f75df574-9p7vk\" (UID: \"53a2d4ef-7e51-422a-9344-b5a5da804fb2\") " pod="kube-system/coredns-76f75df574-9p7vk" Jun 26 07:18:18.292054 kubelet[2610]: I0626 07:18:18.291923 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rdb\" (UniqueName: \"kubernetes.io/projected/84be4be2-9405-41f0-9cfe-b97c7e3eef6e-kube-api-access-79rdb\") pod \"coredns-76f75df574-hwn29\" (UID: \"84be4be2-9405-41f0-9cfe-b97c7e3eef6e\") " pod="kube-system/coredns-76f75df574-hwn29" Jun 26 07:18:18.292447 kubelet[2610]: I0626 07:18:18.292106 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84be4be2-9405-41f0-9cfe-b97c7e3eef6e-config-volume\") pod \"coredns-76f75df574-hwn29\" (UID: \"84be4be2-9405-41f0-9cfe-b97c7e3eef6e\") " pod="kube-system/coredns-76f75df574-hwn29" Jun 26 07:18:18.368519 kubelet[2610]: I0626 07:18:18.367197 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kzfjf" podStartSLOduration=7.650070111 podStartE2EDuration="28.367119442s" podCreationTimestamp="2024-06-26 07:17:50 +0000 UTC" firstStartedPulling="2024-06-26 07:17:50.931606186 +0000 UTC m=+11.908709654" lastFinishedPulling="2024-06-26 07:18:11.648655522 +0000 UTC m=+32.625758985" observedRunningTime="2024-06-26 
07:18:18.3283425 +0000 UTC m=+39.305445977" watchObservedRunningTime="2024-06-26 07:18:18.367119442 +0000 UTC m=+39.344222919"
Jun 26 07:18:18.535431 kubelet[2610]: E0626 07:18:18.535090 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:18.536687 containerd[1466]: time="2024-06-26T07:18:18.536591064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7vk,Uid:53a2d4ef-7e51-422a-9344-b5a5da804fb2,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:18.567717 kubelet[2610]: E0626 07:18:18.567330 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:18.568334 containerd[1466]: time="2024-06-26T07:18:18.568239539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwn29,Uid:84be4be2-9405-41f0-9cfe-b97c7e3eef6e,Namespace:kube-system,Attempt:0,}"
Jun 26 07:18:19.264116 kubelet[2610]: E0626 07:18:19.264058 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:20.273715 kubelet[2610]: E0626 07:18:20.273581 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:21.014743 systemd-networkd[1373]: cilium_host: Link UP
Jun 26 07:18:21.023153 systemd-networkd[1373]: cilium_net: Link UP
Jun 26 07:18:21.023745 systemd-networkd[1373]: cilium_net: Gained carrier
Jun 26 07:18:21.024230 systemd-networkd[1373]: cilium_host: Gained carrier
Jun 26 07:18:21.273976 systemd-networkd[1373]: cilium_host: Gained IPv6LL
Jun 26 07:18:21.607260 systemd-networkd[1373]: cilium_vxlan: Link UP
Jun 26 07:18:21.611153 systemd-networkd[1373]: cilium_vxlan: Gained carrier
Jun 26 07:18:22.021370 systemd-networkd[1373]: cilium_net: Gained IPv6LL
Jun 26 07:18:22.856528 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL
Jun 26 07:18:23.166985 kernel: NET: Registered PF_ALG protocol family
Jun 26 07:18:25.685606 systemd-networkd[1373]: lxc_health: Link UP
Jun 26 07:18:25.689680 systemd-networkd[1373]: lxc_health: Gained carrier
Jun 26 07:18:26.338824 systemd-networkd[1373]: lxc9ad7b0c8f09f: Link UP
Jun 26 07:18:26.346075 kernel: eth0: renamed from tmpa4b3b
Jun 26 07:18:26.356580 systemd-networkd[1373]: lxc22aafc2f6b23: Link UP
Jun 26 07:18:26.359669 systemd-networkd[1373]: lxc9ad7b0c8f09f: Gained carrier
Jun 26 07:18:26.376585 kernel: eth0: renamed from tmp9289c
Jun 26 07:18:26.391701 systemd-networkd[1373]: lxc22aafc2f6b23: Gained carrier
Jun 26 07:18:26.566949 kubelet[2610]: E0626 07:18:26.566889 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:27.375000 kubelet[2610]: E0626 07:18:27.374941 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:27.717593 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Jun 26 07:18:27.783185 systemd-networkd[1373]: lxc22aafc2f6b23: Gained IPv6LL
Jun 26 07:18:27.974360 systemd-networkd[1373]: lxc9ad7b0c8f09f: Gained IPv6LL
Jun 26 07:18:28.378552 kubelet[2610]: E0626 07:18:28.378352 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jun 26 07:18:34.176346 systemd[1]: Started sshd@9-146.190.154.167:22-147.75.109.163:55110.service - OpenSSH per-connection server daemon (147.75.109.163:55110).
Jun 26 07:18:34.330083 sshd[3813]: Accepted publickey for core from 147.75.109.163 port 55110 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc
Jun 26 07:18:34.330566 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 26 07:18:34.350852 systemd-logind[1448]: New session 10 of user core.
Jun 26 07:18:34.355483 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 26 07:18:35.446761 sshd[3813]: pam_unix(sshd:session): session closed for user core
Jun 26 07:18:35.458863 systemd[1]: sshd@9-146.190.154.167:22-147.75.109.163:55110.service: Deactivated successfully.
Jun 26 07:18:35.460378 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jun 26 07:18:35.466174 systemd[1]: session-10.scope: Deactivated successfully.
Jun 26 07:18:35.473049 systemd-logind[1448]: Removed session 10.
Jun 26 07:18:36.588287 containerd[1466]: time="2024-06-26T07:18:36.588096806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:36.590924 containerd[1466]: time="2024-06-26T07:18:36.590708576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:36.590924 containerd[1466]: time="2024-06-26T07:18:36.590800186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:36.590924 containerd[1466]: time="2024-06-26T07:18:36.590819683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:36.678631 systemd[1]: run-containerd-runc-k8s.io-9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85-runc.SkAFbd.mount: Deactivated successfully.
Jun 26 07:18:36.701425 systemd[1]: Started cri-containerd-9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85.scope - libcontainer container 9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85.
Jun 26 07:18:36.808204 containerd[1466]: time="2024-06-26T07:18:36.807600174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 26 07:18:36.808204 containerd[1466]: time="2024-06-26T07:18:36.807742989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 26 07:18:36.808204 containerd[1466]: time="2024-06-26T07:18:36.807786920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 26 07:18:36.808204 containerd[1466]: time="2024-06-26T07:18:36.807811925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:18:36.887260 containerd[1466]: time="2024-06-26T07:18:36.886884586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwn29,Uid:84be4be2-9405-41f0-9cfe-b97c7e3eef6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85\"" Jun 26 07:18:36.890375 systemd[1]: Started cri-containerd-a4b3bc9b781bb0a7799172b97f4e42d7063ea27021b6bcfaa9fab43f7a1c6be7.scope - libcontainer container a4b3bc9b781bb0a7799172b97f4e42d7063ea27021b6bcfaa9fab43f7a1c6be7. Jun 26 07:18:36.897104 kubelet[2610]: E0626 07:18:36.893454 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:36.910603 containerd[1466]: time="2024-06-26T07:18:36.910096828Z" level=info msg="CreateContainer within sandbox \"9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 26 07:18:36.969248 containerd[1466]: time="2024-06-26T07:18:36.968634712Z" level=info msg="CreateContainer within sandbox \"9289cf75f53b9486cd01d01a12ab185619f722c80c402e6f7b32da600d208e85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"686c8e13f98b2d3c2c1d64d12a1219d00fad2c72e00107f6a91aced5029ebe1f\"" Jun 26 07:18:36.972719 containerd[1466]: time="2024-06-26T07:18:36.972505538Z" level=info msg="StartContainer for \"686c8e13f98b2d3c2c1d64d12a1219d00fad2c72e00107f6a91aced5029ebe1f\"" Jun 26 07:18:37.043589 containerd[1466]: time="2024-06-26T07:18:37.043506802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7vk,Uid:53a2d4ef-7e51-422a-9344-b5a5da804fb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4b3bc9b781bb0a7799172b97f4e42d7063ea27021b6bcfaa9fab43f7a1c6be7\"" Jun 26 07:18:37.048087 kubelet[2610]: E0626 07:18:37.046377 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:37.063432 containerd[1466]: time="2024-06-26T07:18:37.063345672Z" level=info msg="CreateContainer within sandbox \"a4b3bc9b781bb0a7799172b97f4e42d7063ea27021b6bcfaa9fab43f7a1c6be7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 26 07:18:37.066398 systemd[1]: Started cri-containerd-686c8e13f98b2d3c2c1d64d12a1219d00fad2c72e00107f6a91aced5029ebe1f.scope - libcontainer container 686c8e13f98b2d3c2c1d64d12a1219d00fad2c72e00107f6a91aced5029ebe1f. 
Jun 26 07:18:37.129812 containerd[1466]: time="2024-06-26T07:18:37.129733137Z" level=info msg="CreateContainer within sandbox \"a4b3bc9b781bb0a7799172b97f4e42d7063ea27021b6bcfaa9fab43f7a1c6be7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d58f01e2672bba51ff6ce0774c09e0f327af6b9cfd21df6041e62e8b1be867d2\"" Jun 26 07:18:37.133507 containerd[1466]: time="2024-06-26T07:18:37.132634033Z" level=info msg="StartContainer for \"d58f01e2672bba51ff6ce0774c09e0f327af6b9cfd21df6041e62e8b1be867d2\"" Jun 26 07:18:37.155170 containerd[1466]: time="2024-06-26T07:18:37.154403512Z" level=info msg="StartContainer for \"686c8e13f98b2d3c2c1d64d12a1219d00fad2c72e00107f6a91aced5029ebe1f\" returns successfully" Jun 26 07:18:37.213398 systemd[1]: Started cri-containerd-d58f01e2672bba51ff6ce0774c09e0f327af6b9cfd21df6041e62e8b1be867d2.scope - libcontainer container d58f01e2672bba51ff6ce0774c09e0f327af6b9cfd21df6041e62e8b1be867d2. Jun 26 07:18:37.280749 containerd[1466]: time="2024-06-26T07:18:37.280673798Z" level=info msg="StartContainer for \"d58f01e2672bba51ff6ce0774c09e0f327af6b9cfd21df6041e62e8b1be867d2\" returns successfully" Jun 26 07:18:37.417806 kubelet[2610]: E0626 07:18:37.417408 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:37.422812 kubelet[2610]: E0626 07:18:37.422724 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:37.507094 kubelet[2610]: I0626 07:18:37.507011 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9p7vk" podStartSLOduration=47.506945279 podStartE2EDuration="47.506945279s" podCreationTimestamp="2024-06-26 07:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:37.468655632 +0000 UTC m=+58.445759107" watchObservedRunningTime="2024-06-26 07:18:37.506945279 +0000 UTC m=+58.484048790" Jun 26 07:18:38.425075 kubelet[2610]: E0626 07:18:38.424979 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:38.425686 kubelet[2610]: E0626 07:18:38.424988 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:38.452819 kubelet[2610]: I0626 07:18:38.452063 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hwn29" podStartSLOduration=48.451960587 podStartE2EDuration="48.451960587s" podCreationTimestamp="2024-06-26 07:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:18:37.509454302 +0000 UTC m=+58.486557792" watchObservedRunningTime="2024-06-26 07:18:38.451960587 +0000 UTC m=+59.429064066" Jun 26 07:18:39.428876 kubelet[2610]: E0626 07:18:39.428790 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:39.429597 
kubelet[2610]: E0626 07:18:39.429553 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:18:40.473649 systemd[1]: Started sshd@10-146.190.154.167:22-147.75.109.163:51088.service - OpenSSH per-connection server daemon (147.75.109.163:51088). Jun 26 07:18:40.604093 sshd[4005]: Accepted publickey for core from 147.75.109.163 port 51088 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:40.608139 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:40.622190 systemd-logind[1448]: New session 11 of user core. Jun 26 07:18:40.633565 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 26 07:18:40.957404 sshd[4005]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:40.964236 systemd[1]: sshd@10-146.190.154.167:22-147.75.109.163:51088.service: Deactivated successfully. Jun 26 07:18:40.968621 systemd[1]: session-11.scope: Deactivated successfully. Jun 26 07:18:40.975209 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jun 26 07:18:40.977578 systemd-logind[1448]: Removed session 11. Jun 26 07:18:44.313294 systemd[1]: Started sshd@11-146.190.154.167:22-184.168.122.184:40992.service - OpenSSH per-connection server daemon (184.168.122.184:40992). Jun 26 07:18:45.325362 sshd[4019]: Invalid user administrator from 184.168.122.184 port 40992 Jun 26 07:18:45.513591 sshd[4019]: Received disconnect from 184.168.122.184 port 40992:11: Bye Bye [preauth] Jun 26 07:18:45.513591 sshd[4019]: Disconnected from invalid user administrator 184.168.122.184 port 40992 [preauth] Jun 26 07:18:45.517125 systemd[1]: sshd@11-146.190.154.167:22-184.168.122.184:40992.service: Deactivated successfully. Jun 26 07:18:45.998954 systemd[1]: Started sshd@12-146.190.154.167:22-147.75.109.163:47078.service - OpenSSH per-connection server daemon (147.75.109.163:47078). Jun 26 07:18:46.075079 sshd[4024]: Accepted publickey for core from 147.75.109.163 port 47078 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:46.078614 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:46.102658 systemd-logind[1448]: New session 12 of user core. Jun 26 07:18:46.112455 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 26 07:18:46.389865 sshd[4024]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:46.403863 systemd[1]: sshd@12-146.190.154.167:22-147.75.109.163:47078.service: Deactivated successfully. Jun 26 07:18:46.412587 systemd[1]: session-12.scope: Deactivated successfully. Jun 26 07:18:46.418009 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jun 26 07:18:46.423902 systemd-logind[1448]: Removed session 12. Jun 26 07:18:51.419895 systemd[1]: Started sshd@13-146.190.154.167:22-147.75.109.163:47084.service - OpenSSH per-connection server daemon (147.75.109.163:47084). Jun 26 07:18:51.512552 sshd[4037]: Accepted publickey for core from 147.75.109.163 port 47084 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:51.516309 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:51.525320 systemd-logind[1448]: New session 13 of user core. Jun 26 07:18:51.535599 systemd[1]: Started session-13.scope - Session 13 of User core. 
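The two pod_startup_latency_tracker entries above are internally consistent: both coredns pods carry podCreationTimestamp 2024-06-26 07:17:50 UTC, and the reported podStartSLOduration is the watch-observed running time minus that timestamp, e.g. 07:18:37.506945279 - 07:17:50 = 47.506945279 s for coredns-76f75df574-9p7vk and 07:18:38.451960587 - 07:17:50 = 48.451960587 s for coredns-76f75df574-hwn29. The trailing m=+58.48... figures are the same instants expressed on the monotonic clock, i.e. seconds since the kubelet process started, and the zeroed firstStartedPulling/lastFinishedPulling fields suggest the images were already present on the node, so no pull time contributes to the duration.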
Jun 26 07:18:51.908804 sshd[4037]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:51.933773 systemd[1]: sshd@13-146.190.154.167:22-147.75.109.163:47084.service: Deactivated successfully. Jun 26 07:18:51.938350 systemd[1]: session-13.scope: Deactivated successfully. Jun 26 07:18:51.950004 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jun 26 07:18:51.963597 systemd[1]: Started sshd@14-146.190.154.167:22-147.75.109.163:47088.service - OpenSSH per-connection server daemon (147.75.109.163:47088). Jun 26 07:18:51.969148 systemd-logind[1448]: Removed session 13. Jun 26 07:18:52.073869 sshd[4053]: Accepted publickey for core from 147.75.109.163 port 47088 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:52.078215 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:52.095581 systemd-logind[1448]: New session 14 of user core. Jun 26 07:18:52.104010 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 26 07:18:52.571032 sshd[4053]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:52.594410 systemd[1]: sshd@14-146.190.154.167:22-147.75.109.163:47088.service: Deactivated successfully. Jun 26 07:18:52.604033 systemd[1]: session-14.scope: Deactivated successfully. Jun 26 07:18:52.607122 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jun 26 07:18:52.626444 systemd[1]: Started sshd@15-146.190.154.167:22-147.75.109.163:47102.service - OpenSSH per-connection server daemon (147.75.109.163:47102). Jun 26 07:18:52.636617 systemd-logind[1448]: Removed session 14. Jun 26 07:18:52.827840 sshd[4064]: Accepted publickey for core from 147.75.109.163 port 47102 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:52.832910 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:52.849212 systemd-logind[1448]: New session 15 of user core. Jun 26 07:18:52.855903 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 26 07:18:53.162651 sshd[4064]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:53.173521 systemd[1]: sshd@15-146.190.154.167:22-147.75.109.163:47102.service: Deactivated successfully. Jun 26 07:18:53.179325 systemd[1]: session-15.scope: Deactivated successfully. Jun 26 07:18:53.183013 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jun 26 07:18:53.191071 systemd-logind[1448]: Removed session 15. Jun 26 07:18:58.181614 systemd[1]: Started sshd@16-146.190.154.167:22-147.75.109.163:54532.service - OpenSSH per-connection server daemon (147.75.109.163:54532). Jun 26 07:18:58.249113 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 54532 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:18:58.252944 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:18:58.263596 systemd-logind[1448]: New session 16 of user core. Jun 26 07:18:58.269888 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 26 07:18:58.477101 sshd[4077]: pam_unix(sshd:session): session closed for user core Jun 26 07:18:58.483864 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jun 26 07:18:58.484815 systemd[1]: sshd@16-146.190.154.167:22-147.75.109.163:54532.service: Deactivated successfully. Jun 26 07:18:58.491560 systemd[1]: session-16.scope: Deactivated successfully. Jun 26 07:18:58.498359 systemd-logind[1448]: Removed session 16. 
Jun 26 07:19:03.509574 systemd[1]: Started sshd@17-146.190.154.167:22-147.75.109.163:54548.service - OpenSSH per-connection server daemon (147.75.109.163:54548). Jun 26 07:19:03.563670 kubelet[2610]: E0626 07:19:03.563166 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:03.616171 sshd[4090]: Accepted publickey for core from 147.75.109.163 port 54548 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:03.624148 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:03.678600 systemd-logind[1448]: New session 17 of user core. Jun 26 07:19:03.690744 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 26 07:19:04.029989 sshd[4090]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:04.045020 systemd[1]: sshd@17-146.190.154.167:22-147.75.109.163:54548.service: Deactivated successfully. Jun 26 07:19:04.054155 systemd[1]: session-17.scope: Deactivated successfully. Jun 26 07:19:04.062676 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jun 26 07:19:04.072392 systemd-logind[1448]: Removed session 17. Jun 26 07:19:07.562702 kubelet[2610]: E0626 07:19:07.561742 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:09.056458 systemd[1]: Started sshd@18-146.190.154.167:22-147.75.109.163:50760.service - OpenSSH per-connection server daemon (147.75.109.163:50760). Jun 26 07:19:09.148440 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 50760 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:09.152352 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:09.170668 systemd-logind[1448]: New session 18 of user core. Jun 26 07:19:09.181578 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 26 07:19:09.476085 sshd[4103]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:09.506751 systemd[1]: sshd@18-146.190.154.167:22-147.75.109.163:50760.service: Deactivated successfully. Jun 26 07:19:09.510754 systemd[1]: session-18.scope: Deactivated successfully. Jun 26 07:19:09.518442 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jun 26 07:19:09.540337 systemd[1]: Started sshd@19-146.190.154.167:22-147.75.109.163:50770.service - OpenSSH per-connection server daemon (147.75.109.163:50770). Jun 26 07:19:09.557275 systemd-logind[1448]: Removed session 18. Jun 26 07:19:09.628949 sshd[4116]: Accepted publickey for core from 147.75.109.163 port 50770 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:09.635006 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:09.652652 systemd-logind[1448]: New session 19 of user core. Jun 26 07:19:09.664736 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 26 07:19:10.816483 sshd[4116]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:10.918022 systemd[1]: Started sshd@20-146.190.154.167:22-147.75.109.163:50786.service - OpenSSH per-connection server daemon (147.75.109.163:50786). Jun 26 07:19:10.918939 systemd[1]: sshd@19-146.190.154.167:22-147.75.109.163:50770.service: Deactivated successfully. 
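The dns.go:153 warning that recurs throughout this log means the node's /etc/resolv.conf offers more nameservers than a pod's resolv.conf can hold: glibc honours at most three nameserver entries, so kubelet truncates the list before applying it, and here the applied line even repeats 67.207.67.3, suggesting the droplet's resolver configuration contains a duplicate entry. A simplified illustration of that truncation is below; it is not kubelet's actual implementation (which lives under pkg/kubelet/network/dns), only a sketch of the behaviour the warning describes.

```go
// Simplified illustration of the nameserver cap; kubelet's real logic differs.
package dnslimit

import (
	"log"
	"strings"
)

// maxNameservers mirrors the classic glibc MAXNS limit of three resolvers per resolv.conf.
const maxNameservers = 3

// capNameservers trims a nameserver list read from the node's resolv.conf and emits a
// warning shaped like the kubelet message above whenever entries have to be dropped.
func capNameservers(nameservers []string) []string {
	if len(nameservers) <= maxNameservers {
		return nameservers
	}
	applied := nameservers[:maxNameservers]
	log.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s",
		strings.Join(applied, " "))
	return applied
}
```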
Jun 26 07:19:10.938732 systemd[1]: session-19.scope: Deactivated successfully. Jun 26 07:19:10.944821 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jun 26 07:19:10.949359 systemd-logind[1448]: Removed session 19. Jun 26 07:19:11.134280 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 50786 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:11.138833 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:11.150547 systemd-logind[1448]: New session 20 of user core. Jun 26 07:19:11.161271 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 26 07:19:13.572103 kubelet[2610]: E0626 07:19:13.569783 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:14.473168 sshd[4125]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:14.493539 systemd[1]: sshd@20-146.190.154.167:22-147.75.109.163:50786.service: Deactivated successfully. Jun 26 07:19:14.500192 systemd[1]: session-20.scope: Deactivated successfully. Jun 26 07:19:14.502899 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jun 26 07:19:14.518755 systemd[1]: Started sshd@21-146.190.154.167:22-147.75.109.163:50790.service - OpenSSH per-connection server daemon (147.75.109.163:50790). Jun 26 07:19:14.530952 systemd-logind[1448]: Removed session 20. Jun 26 07:19:14.634422 sshd[4144]: Accepted publickey for core from 147.75.109.163 port 50790 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:14.636378 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:14.648073 systemd-logind[1448]: New session 21 of user core. Jun 26 07:19:14.670865 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 26 07:19:14.678553 systemd[1]: Started sshd@22-146.190.154.167:22-152.89.198.106:23019.service - OpenSSH per-connection server daemon (152.89.198.106:23019). Jun 26 07:19:15.462399 sshd[4144]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:15.479702 systemd[1]: sshd@21-146.190.154.167:22-147.75.109.163:50790.service: Deactivated successfully. Jun 26 07:19:15.486880 systemd[1]: session-21.scope: Deactivated successfully. Jun 26 07:19:15.492584 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jun 26 07:19:15.501095 systemd[1]: Started sshd@23-146.190.154.167:22-147.75.109.163:50806.service - OpenSSH per-connection server daemon (147.75.109.163:50806). Jun 26 07:19:15.510078 systemd-logind[1448]: Removed session 21. Jun 26 07:19:15.611958 sshd[4158]: Accepted publickey for core from 147.75.109.163 port 50806 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:15.620343 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:15.643982 systemd-logind[1448]: New session 22 of user core. Jun 26 07:19:15.648393 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 26 07:19:15.845526 sshd[4148]: Received disconnect from 152.89.198.106 port 23019:11: Client disconnecting normally [preauth] Jun 26 07:19:15.845526 sshd[4148]: Disconnected from authenticating user root 152.89.198.106 port 23019 [preauth] Jun 26 07:19:15.847708 systemd[1]: sshd@22-146.190.154.167:22-152.89.198.106:23019.service: Deactivated successfully. 
Jun 26 07:19:15.886794 sshd[4158]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:15.897620 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jun 26 07:19:15.901603 systemd[1]: sshd@23-146.190.154.167:22-147.75.109.163:50806.service: Deactivated successfully. Jun 26 07:19:15.906580 systemd[1]: session-22.scope: Deactivated successfully. Jun 26 07:19:15.914353 systemd-logind[1448]: Removed session 22. Jun 26 07:19:16.562118 kubelet[2610]: E0626 07:19:16.561785 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:20.908797 systemd[1]: Started sshd@24-146.190.154.167:22-147.75.109.163:44664.service - OpenSSH per-connection server daemon (147.75.109.163:44664). Jun 26 07:19:20.988241 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 44664 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:20.992432 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:21.003770 systemd-logind[1448]: New session 23 of user core. Jun 26 07:19:21.013550 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 26 07:19:21.208361 sshd[4177]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:21.214824 systemd[1]: sshd@24-146.190.154.167:22-147.75.109.163:44664.service: Deactivated successfully. Jun 26 07:19:21.221872 systemd[1]: session-23.scope: Deactivated successfully. Jun 26 07:19:21.226319 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jun 26 07:19:21.228572 systemd-logind[1448]: Removed session 23. Jun 26 07:19:26.236908 systemd[1]: Started sshd@25-146.190.154.167:22-147.75.109.163:56656.service - OpenSSH per-connection server daemon (147.75.109.163:56656). Jun 26 07:19:26.313855 sshd[4192]: Accepted publickey for core from 147.75.109.163 port 56656 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:26.318210 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:26.335414 systemd-logind[1448]: New session 24 of user core. Jun 26 07:19:26.343627 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 26 07:19:26.556750 sshd[4192]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:26.568232 systemd[1]: sshd@25-146.190.154.167:22-147.75.109.163:56656.service: Deactivated successfully. Jun 26 07:19:26.573021 systemd[1]: session-24.scope: Deactivated successfully. Jun 26 07:19:26.580430 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jun 26 07:19:26.584577 systemd-logind[1448]: Removed session 24. Jun 26 07:19:31.585603 systemd[1]: Started sshd@26-146.190.154.167:22-147.75.109.163:56658.service - OpenSSH per-connection server daemon (147.75.109.163:56658). Jun 26 07:19:31.656276 sshd[4205]: Accepted publickey for core from 147.75.109.163 port 56658 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:31.660144 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:31.672264 systemd-logind[1448]: New session 25 of user core. Jun 26 07:19:31.677512 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 26 07:19:31.875683 sshd[4205]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:31.883387 systemd[1]: sshd@26-146.190.154.167:22-147.75.109.163:56658.service: Deactivated successfully. Jun 26 07:19:31.888012 systemd[1]: session-25.scope: Deactivated successfully. Jun 26 07:19:31.890294 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jun 26 07:19:31.899539 systemd-logind[1448]: Removed session 25. Jun 26 07:19:36.563769 kubelet[2610]: E0626 07:19:36.561827 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:36.898055 systemd[1]: Started sshd@27-146.190.154.167:22-147.75.109.163:42090.service - OpenSSH per-connection server daemon (147.75.109.163:42090). Jun 26 07:19:36.980154 sshd[4218]: Accepted publickey for core from 147.75.109.163 port 42090 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:36.983209 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:36.997248 systemd-logind[1448]: New session 26 of user core. Jun 26 07:19:37.001424 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 26 07:19:37.241463 sshd[4218]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:37.248788 systemd[1]: sshd@27-146.190.154.167:22-147.75.109.163:42090.service: Deactivated successfully. Jun 26 07:19:37.253019 systemd[1]: session-26.scope: Deactivated successfully. Jun 26 07:19:37.268497 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Jun 26 07:19:37.271178 systemd-logind[1448]: Removed session 26. Jun 26 07:19:42.273737 systemd[1]: Started sshd@28-146.190.154.167:22-147.75.109.163:42102.service - OpenSSH per-connection server daemon (147.75.109.163:42102). Jun 26 07:19:42.388094 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 42102 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:42.397666 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:42.440257 systemd-logind[1448]: New session 27 of user core. Jun 26 07:19:42.447270 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 26 07:19:42.808358 sshd[4233]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:42.824620 systemd[1]: sshd@28-146.190.154.167:22-147.75.109.163:42102.service: Deactivated successfully. Jun 26 07:19:42.829731 systemd[1]: session-27.scope: Deactivated successfully. Jun 26 07:19:42.844915 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Jun 26 07:19:42.848631 systemd-logind[1448]: Removed session 27. Jun 26 07:19:46.564655 kubelet[2610]: E0626 07:19:46.564561 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:47.837365 systemd[1]: Started sshd@29-146.190.154.167:22-147.75.109.163:57996.service - OpenSSH per-connection server daemon (147.75.109.163:57996). Jun 26 07:19:47.906355 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 57996 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:47.909851 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:47.924900 systemd-logind[1448]: New session 28 of user core. 
Jun 26 07:19:47.944794 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 26 07:19:48.188750 sshd[4246]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:48.202932 systemd[1]: sshd@29-146.190.154.167:22-147.75.109.163:57996.service: Deactivated successfully. Jun 26 07:19:48.207025 systemd[1]: session-28.scope: Deactivated successfully. Jun 26 07:19:48.210356 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Jun 26 07:19:48.221912 systemd[1]: Started sshd@30-146.190.154.167:22-147.75.109.163:57998.service - OpenSSH per-connection server daemon (147.75.109.163:57998). Jun 26 07:19:48.226570 systemd-logind[1448]: Removed session 28. Jun 26 07:19:48.299971 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 57998 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:48.303180 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:48.318084 systemd-logind[1448]: New session 29 of user core. Jun 26 07:19:48.322529 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 26 07:19:50.418215 containerd[1466]: time="2024-06-26T07:19:50.417947510Z" level=info msg="StopContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" with timeout 30 (s)" Jun 26 07:19:50.428262 containerd[1466]: time="2024-06-26T07:19:50.428150726Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 26 07:19:50.435942 containerd[1466]: time="2024-06-26T07:19:50.435825762Z" level=info msg="StopContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" with timeout 2 (s)" Jun 26 07:19:50.438166 containerd[1466]: time="2024-06-26T07:19:50.438110860Z" level=info msg="Stop container \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" with signal terminated" Jun 26 07:19:50.440844 containerd[1466]: time="2024-06-26T07:19:50.440732719Z" level=info msg="Stop container \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" with signal terminated" Jun 26 07:19:50.466669 systemd-networkd[1373]: lxc_health: Link DOWN Jun 26 07:19:50.469748 systemd-networkd[1373]: lxc_health: Lost carrier Jun 26 07:19:50.517710 systemd[1]: cri-containerd-b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f.scope: Deactivated successfully. Jun 26 07:19:50.520338 systemd[1]: cri-containerd-b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535.scope: Deactivated successfully. Jun 26 07:19:50.520926 systemd[1]: cri-containerd-b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535.scope: Consumed 15.070s CPU time. Jun 26 07:19:50.591995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f-rootfs.mount: Deactivated successfully. 
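The teardown that begins here, and continues below through StopPodSandbox and the network TearDown, is the stop half of the same CRI v1 surface shown in the earlier sketch: StopContainer delivers the stop signal (hence "Stop container ... with signal terminated") and the runtime escalates to a hard kill once the grace period passes, which is what the 30 s and 2 s timeouts express, and StopPodSandbox then releases the sandbox's network resources. A correspondingly small sketch, reusing the hypothetical client from the earlier example rather than any code from this system:

```go
// Stop-side counterpart to the earlier CRI sketch; client construction is the same as above.
package crihelpers

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopWorkload stops one container and then its sandbox, mirroring the
// StopContainer/StopPodSandbox pairs in the log. timeoutSeconds is the grace
// period before the runtime escalates from the stop signal to a hard kill.
func stopWorkload(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	containerID, sandboxID string, timeoutSeconds int64) error {
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     timeoutSeconds,
	}); err != nil {
		return err
	}
	_, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID})
	return err
}
```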
Jun 26 07:19:50.604494 containerd[1466]: time="2024-06-26T07:19:50.604084370Z" level=info msg="shim disconnected" id=b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f namespace=k8s.io Jun 26 07:19:50.604494 containerd[1466]: time="2024-06-26T07:19:50.604296787Z" level=warning msg="cleaning up after shim disconnected" id=b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f namespace=k8s.io Jun 26 07:19:50.604494 containerd[1466]: time="2024-06-26T07:19:50.604320283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:50.608781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535-rootfs.mount: Deactivated successfully. Jun 26 07:19:50.616591 containerd[1466]: time="2024-06-26T07:19:50.612991442Z" level=info msg="shim disconnected" id=b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535 namespace=k8s.io Jun 26 07:19:50.616591 containerd[1466]: time="2024-06-26T07:19:50.616204160Z" level=warning msg="cleaning up after shim disconnected" id=b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535 namespace=k8s.io Jun 26 07:19:50.616591 containerd[1466]: time="2024-06-26T07:19:50.616234637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:50.661267 containerd[1466]: time="2024-06-26T07:19:50.660987870Z" level=warning msg="cleanup warnings time=\"2024-06-26T07:19:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 26 07:19:50.683288 containerd[1466]: time="2024-06-26T07:19:50.683189835Z" level=info msg="StopContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" returns successfully" Jun 26 07:19:50.685072 containerd[1466]: time="2024-06-26T07:19:50.684446954Z" level=info msg="StopPodSandbox for \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\"" Jun 26 07:19:50.687991 containerd[1466]: time="2024-06-26T07:19:50.684600921Z" level=info msg="Container to stop \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.687991 containerd[1466]: time="2024-06-26T07:19:50.687190826Z" level=info msg="Container to stop \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.687991 containerd[1466]: time="2024-06-26T07:19:50.687247435Z" level=info msg="Container to stop \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.687991 containerd[1466]: time="2024-06-26T07:19:50.687264323Z" level=info msg="Container to stop \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.687991 containerd[1466]: time="2024-06-26T07:19:50.687278871Z" level=info msg="Container to stop \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.689735 containerd[1466]: time="2024-06-26T07:19:50.689677476Z" level=info msg="StopContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" returns successfully" Jun 26 07:19:50.690576 containerd[1466]: 
time="2024-06-26T07:19:50.690537521Z" level=info msg="StopPodSandbox for \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\"" Jun 26 07:19:50.691584 containerd[1466]: time="2024-06-26T07:19:50.691307271Z" level=info msg="Container to stop \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 26 07:19:50.699421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa-shm.mount: Deactivated successfully. Jun 26 07:19:50.710302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b-shm.mount: Deactivated successfully. Jun 26 07:19:50.716874 systemd[1]: cri-containerd-479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa.scope: Deactivated successfully. Jun 26 07:19:50.732548 systemd[1]: cri-containerd-9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b.scope: Deactivated successfully. Jun 26 07:19:50.805477 containerd[1466]: time="2024-06-26T07:19:50.803684009Z" level=info msg="shim disconnected" id=479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa namespace=k8s.io Jun 26 07:19:50.805477 containerd[1466]: time="2024-06-26T07:19:50.805164159Z" level=warning msg="cleaning up after shim disconnected" id=479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa namespace=k8s.io Jun 26 07:19:50.805477 containerd[1466]: time="2024-06-26T07:19:50.805192342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:50.808086 containerd[1466]: time="2024-06-26T07:19:50.807356745Z" level=info msg="shim disconnected" id=9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b namespace=k8s.io Jun 26 07:19:50.808086 containerd[1466]: time="2024-06-26T07:19:50.807465345Z" level=warning msg="cleaning up after shim disconnected" id=9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b namespace=k8s.io Jun 26 07:19:50.808086 containerd[1466]: time="2024-06-26T07:19:50.807478207Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:50.844473 containerd[1466]: time="2024-06-26T07:19:50.844403948Z" level=info msg="TearDown network for sandbox \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" successfully" Jun 26 07:19:50.844473 containerd[1466]: time="2024-06-26T07:19:50.844457905Z" level=info msg="StopPodSandbox for \"479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa\" returns successfully" Jun 26 07:19:50.855031 containerd[1466]: time="2024-06-26T07:19:50.854956614Z" level=info msg="TearDown network for sandbox \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\" successfully" Jun 26 07:19:50.855802 containerd[1466]: time="2024-06-26T07:19:50.855273789Z" level=info msg="StopPodSandbox for \"9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b\" returns successfully" Jun 26 07:19:50.987285 kubelet[2610]: I0626 07:19:50.987101 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bcdf01-5fdb-43df-99f8-e47854022908-clustermesh-secrets\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:50.987285 kubelet[2610]: I0626 07:19:50.987205 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cni-path\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.001759 kubelet[2610]: I0626 07:19:51.001017 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rx8l\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-kube-api-access-2rx8l\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.001759 kubelet[2610]: I0626 07:19:51.001683 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-kernel\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.001759 kubelet[2610]: I0626 07:19:51.001756 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-xtables-lock\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.001801 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-run\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.001840 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-hostproc\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.001895 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-hubble-tls\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.001939 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tc8\" (UniqueName: \"kubernetes.io/projected/a2d78978-79ab-4b06-9d9b-3c67bc431161-kube-api-access-g5tc8\") pod \"a2d78978-79ab-4b06-9d9b-3c67bc431161\" (UID: \"a2d78978-79ab-4b06-9d9b-3c67bc431161\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.001983 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-lib-modules\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002143 kubelet[2610]: I0626 07:19:51.002066 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d78978-79ab-4b06-9d9b-3c67bc431161-cilium-config-path\") pod \"a2d78978-79ab-4b06-9d9b-3c67bc431161\" (UID: \"a2d78978-79ab-4b06-9d9b-3c67bc431161\") " Jun 26 07:19:51.002491 kubelet[2610]: I0626 07:19:51.002102 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-bpf-maps\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002491 kubelet[2610]: I0626 07:19:51.002147 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-net\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002491 kubelet[2610]: I0626 07:19:51.002184 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-cgroup\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002491 kubelet[2610]: I0626 07:19:51.002231 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-etc-cni-netd\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.002491 kubelet[2610]: I0626 07:19:51.002271 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-config-path\") pod \"36bcdf01-5fdb-43df-99f8-e47854022908\" (UID: \"36bcdf01-5fdb-43df-99f8-e47854022908\") " Jun 26 07:19:51.039557 kubelet[2610]: I0626 07:19:51.038285 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.039557 kubelet[2610]: I0626 07:19:51.038432 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.039557 kubelet[2610]: I0626 07:19:51.038475 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-hostproc" (OuterVolumeSpecName: "hostproc") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.039861 kubelet[2610]: I0626 07:19:51.039591 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cni-path" (OuterVolumeSpecName: "cni-path") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.039861 kubelet[2610]: I0626 07:19:51.013936 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.039861 kubelet[2610]: I0626 07:19:51.018912 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 26 07:19:51.039861 kubelet[2610]: I0626 07:19:51.039785 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.045095 kubelet[2610]: I0626 07:19:51.044845 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.045351 kubelet[2610]: I0626 07:19:51.045143 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bcdf01-5fdb-43df-99f8-e47854022908-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 26 07:19:51.045351 kubelet[2610]: I0626 07:19:51.045226 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.045351 kubelet[2610]: I0626 07:19:51.045265 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.045351 kubelet[2610]: I0626 07:19:51.045311 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 26 07:19:51.047686 kubelet[2610]: I0626 07:19:51.047216 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d78978-79ab-4b06-9d9b-3c67bc431161-kube-api-access-g5tc8" (OuterVolumeSpecName: "kube-api-access-g5tc8") pod "a2d78978-79ab-4b06-9d9b-3c67bc431161" (UID: "a2d78978-79ab-4b06-9d9b-3c67bc431161"). InnerVolumeSpecName "kube-api-access-g5tc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 26 07:19:51.047686 kubelet[2610]: I0626 07:19:51.047435 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-kube-api-access-2rx8l" (OuterVolumeSpecName: "kube-api-access-2rx8l") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "kube-api-access-2rx8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 26 07:19:51.050515 kubelet[2610]: I0626 07:19:51.050406 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "36bcdf01-5fdb-43df-99f8-e47854022908" (UID: "36bcdf01-5fdb-43df-99f8-e47854022908"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 26 07:19:51.050927 kubelet[2610]: I0626 07:19:51.050679 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d78978-79ab-4b06-9d9b-3c67bc431161-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2d78978-79ab-4b06-9d9b-3c67bc431161" (UID: "a2d78978-79ab-4b06-9d9b-3c67bc431161"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 26 07:19:51.102971 kubelet[2610]: I0626 07:19:51.102665 2610 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-hostproc\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.102971 kubelet[2610]: I0626 07:19:51.102967 2610 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-hubble-tls\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.102998 2610 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g5tc8\" (UniqueName: \"kubernetes.io/projected/a2d78978-79ab-4b06-9d9b-3c67bc431161-kube-api-access-g5tc8\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103017 2610 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-lib-modules\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103085 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d78978-79ab-4b06-9d9b-3c67bc431161-cilium-config-path\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103111 2610 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-net\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103127 2610 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-bpf-maps\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103149 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-config-path\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103170 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-cgroup\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103349 kubelet[2610]: I0626 07:19:51.103185 2610 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-etc-cni-netd\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103202 2610 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bcdf01-5fdb-43df-99f8-e47854022908-clustermesh-secrets\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103216 2610 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cni-path\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103232 2610 
reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2rx8l\" (UniqueName: \"kubernetes.io/projected/36bcdf01-5fdb-43df-99f8-e47854022908-kube-api-access-2rx8l\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103249 2610 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-host-proc-sys-kernel\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103265 2610 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-xtables-lock\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.103848 kubelet[2610]: I0626 07:19:51.103283 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bcdf01-5fdb-43df-99f8-e47854022908-cilium-run\") on node \"ci-4012.0.0-0-d66a9e5a9c\" DevicePath \"\"" Jun 26 07:19:51.365306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c566d25a05f3c7295c9bd7373bbfda6088219c3829975b49c7d9961ffffb32b-rootfs.mount: Deactivated successfully. Jun 26 07:19:51.365651 systemd[1]: var-lib-kubelet-pods-a2d78978\x2d79ab\x2d4b06\x2d9d9b\x2d3c67bc431161-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5tc8.mount: Deactivated successfully. Jun 26 07:19:51.365865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479472f1c4daa6996bb6a6c5c5cd832ed5ce759cad1100c2e3bf528b92687eaa-rootfs.mount: Deactivated successfully. Jun 26 07:19:51.365971 systemd[1]: var-lib-kubelet-pods-36bcdf01\x2d5fdb\x2d43df\x2d99f8\x2de47854022908-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rx8l.mount: Deactivated successfully. Jun 26 07:19:51.366093 systemd[1]: var-lib-kubelet-pods-36bcdf01\x2d5fdb\x2d43df\x2d99f8\x2de47854022908-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 26 07:19:51.366197 systemd[1]: var-lib-kubelet-pods-36bcdf01\x2d5fdb\x2d43df\x2d99f8\x2de47854022908-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 26 07:19:51.589553 systemd[1]: Removed slice kubepods-burstable-pod36bcdf01_5fdb_43df_99f8_e47854022908.slice - libcontainer container kubepods-burstable-pod36bcdf01_5fdb_43df_99f8_e47854022908.slice. Jun 26 07:19:51.589728 systemd[1]: kubepods-burstable-pod36bcdf01_5fdb_43df_99f8_e47854022908.slice: Consumed 15.231s CPU time. Jun 26 07:19:51.595273 systemd[1]: Removed slice kubepods-besteffort-poda2d78978_79ab_4b06_9d9b_3c67bc431161.slice - libcontainer container kubepods-besteffort-poda2d78978_79ab_4b06_9d9b_3c67bc431161.slice. 
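The var-lib-kubelet-pods-...\x2d...-volumes-kubernetes.io\x7eprojected-....mount unit names in the lines above are systemd's escaping of the kubelet volume paths: "/" separators become "-", characters outside plain alphanumerics, "_" and "." (notably "-" itself and "~") are hex-escaped, and the unit type is appended as a suffix. The sketch below is a simplified rendering of that mapping; the authoritative rules are in systemd's unit-name handling, and edge cases such as a leading dot are ignored here.

```go
// Simplified sketch of systemd's path-to-mount-unit-name escaping; edge cases omitted.
package unitname

import (
	"fmt"
	"strings"
)

// escapeMountUnit maps a filesystem path to the style of .mount unit name seen above:
// "/" separators become "-", and runes outside [A-Za-z0-9_.] (notably "-" and "~")
// are written as \xNN escapes.
func escapeMountUnit(path string) string {
	var b strings.Builder
	for _, r := range strings.Trim(path, "/") {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.':
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, "\\x%02x", r)
		}
	}
	return b.String() + ".mount"
}
```

For example, escapeMountUnit("/var/lib/kubelet/pods/36bcdf01-5fdb-43df-99f8-e47854022908/volumes/kubernetes.io~projected/hubble-tls") yields the var-lib-kubelet-pods-36bcdf01\x2d5fdb\x2d43df\x2d99f8\x2de47854022908-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount unit that systemd reports as deactivated above.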
Jun 26 07:19:51.839595 kubelet[2610]: I0626 07:19:51.839452 2610 scope.go:117] "RemoveContainer" containerID="b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f" Jun 26 07:19:51.846536 containerd[1466]: time="2024-06-26T07:19:51.846436685Z" level=info msg="RemoveContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\"" Jun 26 07:19:51.876705 containerd[1466]: time="2024-06-26T07:19:51.875965318Z" level=info msg="RemoveContainer for \"b246a0bc40a054a4706a87860361ef857451faca73b9c3fa23ac45cb7894d16f\" returns successfully" Jun 26 07:19:51.887896 kubelet[2610]: I0626 07:19:51.887196 2610 scope.go:117] "RemoveContainer" containerID="b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535" Jun 26 07:19:51.919886 containerd[1466]: time="2024-06-26T07:19:51.919312118Z" level=info msg="RemoveContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\"" Jun 26 07:19:51.932079 containerd[1466]: time="2024-06-26T07:19:51.931901209Z" level=info msg="RemoveContainer for \"b729927a81b9ac48b1b56f4b7f3a2110bcb585d737fcbad10fc7ed46a1661535\" returns successfully" Jun 26 07:19:51.933099 kubelet[2610]: I0626 07:19:51.932568 2610 scope.go:117] "RemoveContainer" containerID="5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df" Jun 26 07:19:51.941476 containerd[1466]: time="2024-06-26T07:19:51.939020747Z" level=info msg="RemoveContainer for \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\"" Jun 26 07:19:51.955373 containerd[1466]: time="2024-06-26T07:19:51.955252049Z" level=info msg="RemoveContainer for \"5c96171fa27932d04cf24767f01008eb4f2e69ee8700c39a8415161396c1b1df\" returns successfully" Jun 26 07:19:51.956125 kubelet[2610]: I0626 07:19:51.955793 2610 scope.go:117] "RemoveContainer" containerID="f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e" Jun 26 07:19:51.958024 containerd[1466]: time="2024-06-26T07:19:51.957972187Z" level=info msg="RemoveContainer for \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\"" Jun 26 07:19:51.965855 containerd[1466]: time="2024-06-26T07:19:51.965749966Z" level=info msg="RemoveContainer for \"f8f2703e0f9cdf4c2cedb4aedb24bff97a6e0b79d8134ddbddd975ddc728828e\" returns successfully" Jun 26 07:19:51.966355 kubelet[2610]: I0626 07:19:51.966311 2610 scope.go:117] "RemoveContainer" containerID="fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6" Jun 26 07:19:51.968741 containerd[1466]: time="2024-06-26T07:19:51.968691816Z" level=info msg="RemoveContainer for \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\"" Jun 26 07:19:51.981608 containerd[1466]: time="2024-06-26T07:19:51.981387196Z" level=info msg="RemoveContainer for \"fd0b1eb3218425d80214de70c144f37211b9e53f007354b5d0282a1773e202f6\" returns successfully" Jun 26 07:19:51.982258 kubelet[2610]: I0626 07:19:51.981911 2610 scope.go:117] "RemoveContainer" containerID="6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2" Jun 26 07:19:51.984382 containerd[1466]: time="2024-06-26T07:19:51.984329065Z" level=info msg="RemoveContainer for \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\"" Jun 26 07:19:51.994924 containerd[1466]: time="2024-06-26T07:19:51.994538723Z" level=info msg="RemoveContainer for \"6eefe756ca97f570630522322ce4eadb728e6530537ec39f115c2481468904a2\" returns successfully" Jun 26 07:19:52.160286 sshd[4259]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:52.171944 systemd[1]: 
sshd@30-146.190.154.167:22-147.75.109.163:57998.service: Deactivated successfully. Jun 26 07:19:52.176905 systemd[1]: session-29.scope: Deactivated successfully. Jun 26 07:19:52.177521 systemd[1]: session-29.scope: Consumed 1.113s CPU time. Jun 26 07:19:52.181535 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit. Jun 26 07:19:52.191194 systemd[1]: Started sshd@31-146.190.154.167:22-147.75.109.163:58000.service - OpenSSH per-connection server daemon (147.75.109.163:58000). Jun 26 07:19:52.194420 systemd-logind[1448]: Removed session 29. Jun 26 07:19:52.299978 sshd[4424]: Accepted publickey for core from 147.75.109.163 port 58000 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:52.303296 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:52.312636 systemd-logind[1448]: New session 30 of user core. Jun 26 07:19:52.324879 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 26 07:19:53.521387 sshd[4424]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:53.536710 systemd[1]: sshd@31-146.190.154.167:22-147.75.109.163:58000.service: Deactivated successfully. Jun 26 07:19:53.544606 systemd[1]: session-30.scope: Deactivated successfully. Jun 26 07:19:53.553511 systemd-logind[1448]: Session 30 logged out. Waiting for processes to exit. Jun 26 07:19:53.571247 systemd[1]: Started sshd@32-146.190.154.167:22-147.75.109.163:58010.service - OpenSSH per-connection server daemon (147.75.109.163:58010). Jun 26 07:19:53.578287 systemd-logind[1448]: Removed session 30. Jun 26 07:19:53.580783 kubelet[2610]: I0626 07:19:53.580725 2610 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" path="/var/lib/kubelet/pods/36bcdf01-5fdb-43df-99f8-e47854022908/volumes" Jun 26 07:19:53.583223 kubelet[2610]: I0626 07:19:53.581951 2610 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2d78978-79ab-4b06-9d9b-3c67bc431161" path="/var/lib/kubelet/pods/a2d78978-79ab-4b06-9d9b-3c67bc431161/volumes" Jun 26 07:19:53.598089 kubelet[2610]: I0626 07:19:53.596089 2610 topology_manager.go:215] "Topology Admit Handler" podUID="e5769b64-0078-4934-9655-09c9fc345056" podNamespace="kube-system" podName="cilium-d9kd2" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601726 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="clean-cilium-state" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601811 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="mount-cgroup" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601828 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="apply-sysctl-overwrites" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601861 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="mount-bpf-fs" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601881 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2d78978-79ab-4b06-9d9b-3c67bc431161" containerName="cilium-operator" Jun 26 07:19:53.602087 kubelet[2610]: E0626 07:19:53.601898 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="cilium-agent" Jun 26 07:19:53.602087 
kubelet[2610]: I0626 07:19:53.601974 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d78978-79ab-4b06-9d9b-3c67bc431161" containerName="cilium-operator" Jun 26 07:19:53.602087 kubelet[2610]: I0626 07:19:53.601992 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bcdf01-5fdb-43df-99f8-e47854022908" containerName="cilium-agent" Jun 26 07:19:53.718109 sshd[4436]: Accepted publickey for core from 147.75.109.163 port 58010 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:53.722817 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:53.765311 systemd-logind[1448]: New session 31 of user core. Jun 26 07:19:53.770458 systemd[1]: Started session-31.scope - Session 31 of User core. Jun 26 07:19:53.771949 systemd[1]: Created slice kubepods-burstable-pode5769b64_0078_4934_9655_09c9fc345056.slice - libcontainer container kubepods-burstable-pode5769b64_0078_4934_9655_09c9fc345056.slice. Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.836926 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-xtables-lock\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.837072 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5769b64-0078-4934-9655-09c9fc345056-cilium-config-path\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.837122 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-cilium-run\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.837156 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5769b64-0078-4934-9655-09c9fc345056-cilium-ipsec-secrets\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.837194 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-bpf-maps\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.839663 kubelet[2610]: I0626 07:19:53.837227 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-cilium-cgroup\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837269 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-etc-cni-netd\") pod \"cilium-d9kd2\" (UID: 
\"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837299 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-host-proc-sys-kernel\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837329 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5769b64-0078-4934-9655-09c9fc345056-hubble-tls\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837362 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-lib-modules\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837397 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-host-proc-sys-net\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840195 kubelet[2610]: I0626 07:19:53.837436 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcnh5\" (UniqueName: \"kubernetes.io/projected/e5769b64-0078-4934-9655-09c9fc345056-kube-api-access-dcnh5\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840607 kubelet[2610]: I0626 07:19:53.837503 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-cni-path\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840607 kubelet[2610]: I0626 07:19:53.837547 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5769b64-0078-4934-9655-09c9fc345056-hostproc\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.840607 kubelet[2610]: I0626 07:19:53.837579 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5769b64-0078-4934-9655-09c9fc345056-clustermesh-secrets\") pod \"cilium-d9kd2\" (UID: \"e5769b64-0078-4934-9655-09c9fc345056\") " pod="kube-system/cilium-d9kd2" Jun 26 07:19:53.871501 sshd[4436]: pam_unix(sshd:session): session closed for user core Jun 26 07:19:53.892875 systemd[1]: sshd@32-146.190.154.167:22-147.75.109.163:58010.service: Deactivated successfully. Jun 26 07:19:53.903929 systemd[1]: session-31.scope: Deactivated successfully. Jun 26 07:19:53.919703 systemd-logind[1448]: Session 31 logged out. Waiting for processes to exit. 
Jun 26 07:19:53.937619 systemd[1]: Started sshd@33-146.190.154.167:22-147.75.109.163:58012.service - OpenSSH per-connection server daemon (147.75.109.163:58012). Jun 26 07:19:53.941970 systemd-logind[1448]: Removed session 31. Jun 26 07:19:54.032751 systemd[1]: Started sshd@34-146.190.154.167:22-184.168.122.184:38970.service - OpenSSH per-connection server daemon (184.168.122.184:38970). Jun 26 07:19:54.097947 kubelet[2610]: E0626 07:19:54.097778 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:54.099958 containerd[1466]: time="2024-06-26T07:19:54.099867069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9kd2,Uid:e5769b64-0078-4934-9655-09c9fc345056,Namespace:kube-system,Attempt:0,}" Jun 26 07:19:54.120659 sshd[4445]: Accepted publickey for core from 147.75.109.163 port 58012 ssh2: RSA SHA256:WAbSU2evOMlkRubBjAYMH3yLpljmUfJl2SAjNYWQOFc Jun 26 07:19:54.125656 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 26 07:19:54.157011 systemd-logind[1448]: New session 32 of user core. Jun 26 07:19:54.160386 systemd[1]: Started session-32.scope - Session 32 of User core. Jun 26 07:19:54.193099 containerd[1466]: time="2024-06-26T07:19:54.192856159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 26 07:19:54.193099 containerd[1466]: time="2024-06-26T07:19:54.192946382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:19:54.193099 containerd[1466]: time="2024-06-26T07:19:54.192980071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 26 07:19:54.193099 containerd[1466]: time="2024-06-26T07:19:54.193012943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 26 07:19:54.233940 systemd[1]: Started cri-containerd-044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617.scope - libcontainer container 044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617. 
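The "RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9kd2,...}" line is kubelet calling the CRI RuntimeService that containerd exposes on its socket, which is what later produces the cri-containerd-…scope unit started here. A rough sketch of the same call made directly against the CRI gRPC API follows; the socket path and sandbox metadata are taken from the log, while the rest of the config (DNS, log directory, Linux options that kubelet normally fills in) is omitted, so this is a bare-minimum illustration rather than what kubelet actually sends.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd serves the CRI RuntimeService on its own unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Metadata mirrors the RunPodSandbox log entry above.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-d9kd2",
				Uid:       "e5769b64-0078-4934-9655-09c9fc345056",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId)
}
```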
Jun 26 07:19:54.306375 containerd[1466]: time="2024-06-26T07:19:54.306198315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9kd2,Uid:e5769b64-0078-4934-9655-09c9fc345056,Namespace:kube-system,Attempt:0,} returns sandbox id \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\"" Jun 26 07:19:54.311902 kubelet[2610]: E0626 07:19:54.311627 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:54.328877 containerd[1466]: time="2024-06-26T07:19:54.328530237Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 26 07:19:54.376522 containerd[1466]: time="2024-06-26T07:19:54.374418180Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca\"" Jun 26 07:19:54.378207 containerd[1466]: time="2024-06-26T07:19:54.377381564Z" level=info msg="StartContainer for \"815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca\"" Jun 26 07:19:54.453387 systemd[1]: Started cri-containerd-815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca.scope - libcontainer container 815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca. Jun 26 07:19:54.536936 containerd[1466]: time="2024-06-26T07:19:54.536822187Z" level=info msg="StartContainer for \"815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca\" returns successfully" Jun 26 07:19:54.567667 systemd[1]: cri-containerd-815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca.scope: Deactivated successfully. 
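Each init container that follows (mount-cgroup here, then apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state) goes through the same CreateContainer-within-sandbox / StartContainer pair seen above. A hedged sketch of that pair against the CRI API follows, reusing the sandbox id returned by RunPodSandbox; the image and command are placeholders, not the real Cilium init container spec, and kubelet additionally passes the original PodSandboxConfig, which this sketch leaves out.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id as returned by RunPodSandbox in the log above.
	sandboxID := "044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617"

	createResp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			// Placeholder image and command; the real values come from the Cilium DaemonSet.
			Image:   &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:placeholder"},
			Command: []string{"/bin/sh", "-c", "true"},
		},
		// kubelet also sets SandboxConfig here; omitted in this sketch.
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: createResp.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", createResp.ContainerId)
}
```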
Jun 26 07:19:54.641244 containerd[1466]: time="2024-06-26T07:19:54.640710103Z" level=info msg="shim disconnected" id=815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca namespace=k8s.io Jun 26 07:19:54.641244 containerd[1466]: time="2024-06-26T07:19:54.640803668Z" level=warning msg="cleaning up after shim disconnected" id=815e8fce43be1689227706d900fb89a7bb8b0016a840286a47dc57d77e1bf8ca namespace=k8s.io Jun 26 07:19:54.641244 containerd[1466]: time="2024-06-26T07:19:54.640818879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:54.878420 kubelet[2610]: E0626 07:19:54.878371 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:54.886061 containerd[1466]: time="2024-06-26T07:19:54.884771994Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 26 07:19:54.904311 kubelet[2610]: E0626 07:19:54.903671 2610 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 26 07:19:54.936123 containerd[1466]: time="2024-06-26T07:19:54.935947339Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7\"" Jun 26 07:19:54.940571 containerd[1466]: time="2024-06-26T07:19:54.938356466Z" level=info msg="StartContainer for \"56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7\"" Jun 26 07:19:54.996216 systemd[1]: Started cri-containerd-56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7.scope - libcontainer container 56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7. Jun 26 07:19:55.081115 containerd[1466]: time="2024-06-26T07:19:55.080986798Z" level=info msg="StartContainer for \"56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7\" returns successfully" Jun 26 07:19:55.096939 systemd[1]: cri-containerd-56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7.scope: Deactivated successfully. Jun 26 07:19:55.160335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7-rootfs.mount: Deactivated successfully. Jun 26 07:19:55.171532 containerd[1466]: time="2024-06-26T07:19:55.171086588Z" level=info msg="shim disconnected" id=56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7 namespace=k8s.io Jun 26 07:19:55.171532 containerd[1466]: time="2024-06-26T07:19:55.171176020Z" level=warning msg="cleaning up after shim disconnected" id=56ced6569f618ae1e916dd1b5908283375e69dd4b3127b742feeeefee4120bc7 namespace=k8s.io Jun 26 07:19:55.171532 containerd[1466]: time="2024-06-26T07:19:55.171195043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:55.227857 sshd[4452]: Received disconnect from 184.168.122.184 port 38970:11: Bye Bye [preauth] Jun 26 07:19:55.227857 sshd[4452]: Disconnected from authenticating user root 184.168.122.184 port 38970 [preauth] Jun 26 07:19:55.232640 systemd[1]: sshd@34-146.190.154.167:22-184.168.122.184:38970.service: Deactivated successfully. 
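The "shim disconnected ... cleaning up after shim disconnected" messages above are expected, not failures: each init container exits almost immediately, so containerd tears down its runc shim right after StartContainer returns. The apply-sysctl-overwrites step that follows exists to set kernel sysctls the agent needs before it starts; a minimal sketch of writing a sysctl by dotted key is shown below, with the specific key and value being illustrative assumptions rather than the exact set Cilium's init script applies.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a value under /proc/sys, e.g. "net.ipv4.conf.all.rp_filter" -> "0".
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative only: relaxing rp_filter is a common CNI requirement,
	// not necessarily what this particular init container writes.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
}
```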
Jun 26 07:19:55.888085 kubelet[2610]: E0626 07:19:55.886599 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:55.894141 containerd[1466]: time="2024-06-26T07:19:55.893801527Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 26 07:19:55.953652 containerd[1466]: time="2024-06-26T07:19:55.950465800Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822\"" Jun 26 07:19:55.953652 containerd[1466]: time="2024-06-26T07:19:55.952363640Z" level=info msg="StartContainer for \"db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822\"" Jun 26 07:19:56.027551 systemd[1]: Started cri-containerd-db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822.scope - libcontainer container db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822. Jun 26 07:19:56.089176 containerd[1466]: time="2024-06-26T07:19:56.089115239Z" level=info msg="StartContainer for \"db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822\" returns successfully" Jun 26 07:19:56.099662 systemd[1]: cri-containerd-db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822.scope: Deactivated successfully. Jun 26 07:19:56.141545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822-rootfs.mount: Deactivated successfully. 
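The mount-bpf-fs container started above makes sure the BPF filesystem is mounted at /sys/fs/bpf so that Cilium's eBPF maps survive agent restarts. A rough equivalent of that step, sketched with golang.org/x/sys/unix, follows; the mountpoint is Cilium's conventional path and the "already mounted" check is deliberately simplistic.

```go
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

const bpfRoot = "/sys/fs/bpf"

// bpffsMounted reports whether a bpf filesystem is already mounted at bpfRoot.
func bpffsMounted() (bool, error) {
	data, err := os.ReadFile("/proc/mounts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[1] == bpfRoot && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	mounted, err := bpffsMounted()
	if err != nil {
		log.Fatal(err)
	}
	if mounted {
		return // nothing to do, same as the init container's fast exit
	}
	if err := unix.Mount("bpffs", bpfRoot, "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs at %s: %v", bpfRoot, err)
	}
}
```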
Jun 26 07:19:56.154555 containerd[1466]: time="2024-06-26T07:19:56.154138759Z" level=info msg="shim disconnected" id=db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822 namespace=k8s.io Jun 26 07:19:56.154555 containerd[1466]: time="2024-06-26T07:19:56.154258839Z" level=warning msg="cleaning up after shim disconnected" id=db08f508b08f1bbe3df5ecb31640f7d843bc4f9dcccc700a259fafe90a4e7822 namespace=k8s.io Jun 26 07:19:56.154555 containerd[1466]: time="2024-06-26T07:19:56.154274826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:56.894572 kubelet[2610]: E0626 07:19:56.894456 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:56.909628 containerd[1466]: time="2024-06-26T07:19:56.909027716Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 26 07:19:56.977165 containerd[1466]: time="2024-06-26T07:19:56.977005364Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766\"" Jun 26 07:19:56.979295 containerd[1466]: time="2024-06-26T07:19:56.978496623Z" level=info msg="StartContainer for \"22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766\"" Jun 26 07:19:57.052499 systemd[1]: run-containerd-runc-k8s.io-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766-runc.tBl2Qc.mount: Deactivated successfully. Jun 26 07:19:57.073588 systemd[1]: Started cri-containerd-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766.scope - libcontainer container 22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766. Jun 26 07:19:57.160179 systemd[1]: cri-containerd-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766.scope: Deactivated successfully. Jun 26 07:19:57.170614 containerd[1466]: time="2024-06-26T07:19:57.164238773Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5769b64_0078_4934_9655_09c9fc345056.slice/cri-containerd-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766.scope/memory.events\": no such file or directory" Jun 26 07:19:57.174079 containerd[1466]: time="2024-06-26T07:19:57.173956155Z" level=info msg="StartContainer for \"22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766\" returns successfully" Jun 26 07:19:57.221526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766-rootfs.mount: Deactivated successfully. 
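The *cgroupsv2.Manager.EventChan warning above is benign: containerd tries to add an inotify watch on the container's memory.events file, but the clean-cilium-state container exits so quickly that its cgroup directory has already been removed. A small sketch of what that watch would ultimately read follows; the path is copied from the warning and the parsing is simplified, so treat it as illustration of the memory.events format rather than containerd's actual event handling.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Path copied from the containerd warning above; it only exists while the
	// container's cgroup does.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pode5769b64_0078_4934_9655_09c9fc345056.slice/" +
		"cri-containerd-22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766.scope/memory.events"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("cgroup already removed (the situation the warning describes): %v", err)
	}

	// memory.events is "key value" per line, e.g. "oom_kill 0".
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		n, _ := strconv.ParseUint(fields[1], 10, 64)
		fmt.Printf("%s = %d\n", fields[0], n)
	}
}
```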
Jun 26 07:19:57.226149 containerd[1466]: time="2024-06-26T07:19:57.225823959Z" level=info msg="shim disconnected" id=22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766 namespace=k8s.io Jun 26 07:19:57.226149 containerd[1466]: time="2024-06-26T07:19:57.225896438Z" level=warning msg="cleaning up after shim disconnected" id=22aa377e99859c1f01a6486330eeaad01d39e9c232b86507898840affa82f766 namespace=k8s.io Jun 26 07:19:57.226149 containerd[1466]: time="2024-06-26T07:19:57.225907900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 26 07:19:57.902531 kubelet[2610]: E0626 07:19:57.902482 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:57.909405 containerd[1466]: time="2024-06-26T07:19:57.908555425Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 26 07:19:57.962922 containerd[1466]: time="2024-06-26T07:19:57.962164836Z" level=info msg="CreateContainer within sandbox \"044ea3a0828dafc60799914a051d92f0ab335d590474ec68407721e802702617\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85\"" Jun 26 07:19:57.966003 containerd[1466]: time="2024-06-26T07:19:57.964300435Z" level=info msg="StartContainer for \"9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85\"" Jun 26 07:19:58.014402 systemd[1]: Started cri-containerd-9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85.scope - libcontainer container 9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85. 
Jun 26 07:19:58.083133 containerd[1466]: time="2024-06-26T07:19:58.082875508Z" level=info msg="StartContainer for \"9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85\" returns successfully" Jun 26 07:19:58.580726 kubelet[2610]: E0626 07:19:58.580256 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-hwn29" podUID="84be4be2-9405-41f0-9cfe-b97c7e3eef6e" Jun 26 07:19:58.882100 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 26 07:19:58.936124 kubelet[2610]: E0626 07:19:58.936061 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:19:58.994009 kubelet[2610]: I0626 07:19:58.991955 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d9kd2" podStartSLOduration=5.991706617 podStartE2EDuration="5.991706617s" podCreationTimestamp="2024-06-26 07:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-26 07:19:58.991436501 +0000 UTC m=+139.968539992" watchObservedRunningTime="2024-06-26 07:19:58.991706617 +0000 UTC m=+139.968810092" Jun 26 07:20:00.102891 kubelet[2610]: E0626 07:20:00.100873 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:00.563777 kubelet[2610]: E0626 07:20:00.561842 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:00.566867 kubelet[2610]: E0626 07:20:00.566571 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:07.582669 systemd-networkd[1373]: lxc_health: Link UP Jun 26 07:20:07.688927 systemd-networkd[1373]: lxc_health: Gained carrier Jun 26 07:20:08.105701 systemd[1]: run-containerd-runc-k8s.io-9f979a5c6fe55b68b772646e879617bca2e9c25ffc71ff0cc1440f2c1bfbca85-runc.GlXdNr.mount: Deactivated successfully. 
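The systemd-networkd lines above report the lxc_health interface coming up and gaining carrier; this is the veth endpoint the newly started cilium-agent creates for its own connectivity health checks, and networkd is simply logging the link-state change. A hedged sketch of bringing an interface up by name with the vishvananda/netlink package follows; the interface name is taken from the log, and creating the veth pair itself (which the agent does) is omitted.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// "lxc_health" is the interface reported by systemd-networkd above.
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatalf("lookup lxc_health: %v", err)
	}
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatalf("set lxc_health up: %v", err)
	}
	log.Printf("%s is up (flags: %s)", link.Attrs().Name, link.Attrs().Flags)
}
```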
Jun 26 07:20:08.117820 kubelet[2610]: E0626 07:20:08.116390 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:09.017256 kubelet[2610]: E0626 07:20:09.016333 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:09.158245 systemd-networkd[1373]: lxc_health: Gained IPv6LL Jun 26 07:20:10.021418 kubelet[2610]: E0626 07:20:10.021032 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jun 26 07:20:13.092247 sshd[4445]: pam_unix(sshd:session): session closed for user core Jun 26 07:20:13.107283 systemd[1]: sshd@33-146.190.154.167:22-147.75.109.163:58012.service: Deactivated successfully. Jun 26 07:20:13.122683 systemd[1]: session-32.scope: Deactivated successfully. Jun 26 07:20:13.126713 systemd-logind[1448]: Session 32 logged out. Waiting for processes to exit. Jun 26 07:20:13.134925 systemd-logind[1448]: Removed session 32.
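The recurring "Nameserver limits exceeded" messages (dns.go:153) throughout this section are kubelet warning that the node's resolv.conf lists more nameserver entries than the three the glibc resolver supports, so it drops the extras when composing pod resolv.conf files; here the droplet's resolver configuration apparently repeats 67.207.67.3. A small sketch of the same truncation logic is below; the parsing is simplified relative to kubelet's and is only meant to show where the limit of three comes into play.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit that kubelet enforces for pods

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, keeping: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
	}
}
```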