Aug 6 07:50:13.109159 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 6 07:50:13.109206 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:50:13.109230 kernel: BIOS-provided physical RAM map: Aug 6 07:50:13.109245 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 6 07:50:13.109260 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 6 07:50:13.109275 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 6 07:50:13.109294 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 6 07:50:13.109310 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 6 07:50:13.109326 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 6 07:50:13.109362 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 6 07:50:13.109379 kernel: NX (Execute Disable) protection: active Aug 6 07:50:13.109395 kernel: APIC: Static calls initialized Aug 6 07:50:13.109411 kernel: SMBIOS 2.8 present. Aug 6 07:50:13.109428 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 6 07:50:13.109449 kernel: Hypervisor detected: KVM Aug 6 07:50:13.109471 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 6 07:50:13.109489 kernel: kvm-clock: using sched offset of 4159238899 cycles Aug 6 07:50:13.109511 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 6 07:50:13.109561 kernel: tsc: Detected 2294.608 MHz processor Aug 6 07:50:13.109579 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 6 07:50:13.109601 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 6 07:50:13.109620 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 6 07:50:13.109638 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 6 07:50:13.109656 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 6 07:50:13.109864 kernel: ACPI: Early table checksum verification disabled Aug 6 07:50:13.109877 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 6 07:50:13.109888 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109901 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109930 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109942 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 6 07:50:13.109954 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109966 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109977 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.109997 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 07:50:13.110018 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Aug 6 07:50:13.110040 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 6 07:50:13.110063 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 6 07:50:13.110081 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 6 07:50:13.110100 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 6 07:50:13.110118 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 6 07:50:13.110146 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 6 07:50:13.110169 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 6 07:50:13.110189 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 6 07:50:13.110214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 6 07:50:13.110239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 6 07:50:13.110264 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 6 07:50:13.110289 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 6 07:50:13.110314 kernel: Zone ranges: Aug 6 07:50:13.110334 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 6 07:50:13.110354 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 6 07:50:13.110373 kernel: Normal empty Aug 6 07:50:13.110394 kernel: Movable zone start for each node Aug 6 07:50:13.110418 kernel: Early memory node ranges Aug 6 07:50:13.110443 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 6 07:50:13.110466 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 6 07:50:13.110486 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 6 07:50:13.110506 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 6 07:50:13.110520 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 6 07:50:13.110613 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 6 07:50:13.110634 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 6 07:50:13.110655 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 6 07:50:13.110670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 6 07:50:13.110688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 6 07:50:13.110712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 6 07:50:13.110726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 6 07:50:13.110747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 6 07:50:13.110774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 6 07:50:13.110797 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 6 07:50:13.110817 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 6 07:50:13.110840 kernel: TSC deadline timer available Aug 6 07:50:13.110857 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 6 07:50:13.110876 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 6 07:50:13.110899 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 6 07:50:13.110926 kernel: Booting paravirtualized kernel on KVM Aug 6 07:50:13.110953 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 6 07:50:13.110967 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 6 07:50:13.110981 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 6 07:50:13.110997 kernel: 
pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 6 07:50:13.111013 kernel: pcpu-alloc: [0] 0 1 Aug 6 07:50:13.111028 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 6 07:50:13.111048 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:50:13.111063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 6 07:50:13.111083 kernel: random: crng init done Aug 6 07:50:13.111098 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 6 07:50:13.111119 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 6 07:50:13.111139 kernel: Fallback order for Node 0: 0 Aug 6 07:50:13.111159 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 6 07:50:13.111178 kernel: Policy zone: DMA32 Aug 6 07:50:13.111198 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 6 07:50:13.111218 kernel: Memory: 1965060K/2096612K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 131292K reserved, 0K cma-reserved) Aug 6 07:50:13.111238 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 6 07:50:13.111262 kernel: Kernel/User page tables isolation: enabled Aug 6 07:50:13.111282 kernel: ftrace: allocating 37659 entries in 148 pages Aug 6 07:50:13.111304 kernel: ftrace: allocated 148 pages with 3 groups Aug 6 07:50:13.111324 kernel: Dynamic Preempt: voluntary Aug 6 07:50:13.111344 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 6 07:50:13.111365 kernel: rcu: RCU event tracing is enabled. Aug 6 07:50:13.111385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 6 07:50:13.111407 kernel: Trampoline variant of Tasks RCU enabled. Aug 6 07:50:13.111427 kernel: Rude variant of Tasks RCU enabled. Aug 6 07:50:13.111450 kernel: Tracing variant of Tasks RCU enabled. Aug 6 07:50:13.111470 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 6 07:50:13.111489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 6 07:50:13.111509 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 6 07:50:13.111552 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 6 07:50:13.111583 kernel: Console: colour VGA+ 80x25 Aug 6 07:50:13.111603 kernel: printk: console [tty0] enabled Aug 6 07:50:13.111622 kernel: printk: console [ttyS0] enabled Aug 6 07:50:13.111642 kernel: ACPI: Core revision 20230628 Aug 6 07:50:13.111666 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 6 07:50:13.112389 kernel: APIC: Switch to symmetric I/O mode setup Aug 6 07:50:13.112414 kernel: x2apic enabled Aug 6 07:50:13.112434 kernel: APIC: Switched APIC routing to: physical x2apic Aug 6 07:50:13.112460 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 6 07:50:13.112481 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Aug 6 07:50:13.112501 kernel: Calibrating delay loop (skipped) preset value.. 
4589.21 BogoMIPS (lpj=2294608) Aug 6 07:50:13.112521 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 6 07:50:13.112564 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 6 07:50:13.112604 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 6 07:50:13.112626 kernel: Spectre V2 : Mitigation: Retpolines Aug 6 07:50:13.112647 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 6 07:50:13.112672 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 6 07:50:13.112694 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 6 07:50:13.112715 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 6 07:50:13.112736 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 6 07:50:13.112757 kernel: MDS: Mitigation: Clear CPU buffers Aug 6 07:50:13.112779 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 6 07:50:13.112807 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 6 07:50:13.112828 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 6 07:50:13.112850 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 6 07:50:13.112872 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 6 07:50:13.112893 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 6 07:50:13.112914 kernel: Freeing SMP alternatives memory: 32K Aug 6 07:50:13.112944 kernel: pid_max: default: 32768 minimum: 301 Aug 6 07:50:13.112959 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 6 07:50:13.112978 kernel: SELinux: Initializing. Aug 6 07:50:13.112992 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 07:50:13.113008 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 07:50:13.113023 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 6 07:50:13.113036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:50:13.113049 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:50:13.113065 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 6 07:50:13.113079 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 6 07:50:13.113100 kernel: signal: max sigframe size: 1776 Aug 6 07:50:13.113116 kernel: rcu: Hierarchical SRCU implementation. Aug 6 07:50:13.113133 kernel: rcu: Max phase no-delay instances is 400. Aug 6 07:50:13.113145 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 6 07:50:13.113158 kernel: smp: Bringing up secondary CPUs ... Aug 6 07:50:13.113172 kernel: smpboot: x86: Booting SMP configuration: Aug 6 07:50:13.113184 kernel: .... 
node #0, CPUs: #1 Aug 6 07:50:13.113197 kernel: smp: Brought up 1 node, 2 CPUs Aug 6 07:50:13.113211 kernel: smpboot: Max logical packages: 1 Aug 6 07:50:13.113224 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Aug 6 07:50:13.113242 kernel: devtmpfs: initialized Aug 6 07:50:13.113257 kernel: x86/mm: Memory block size: 128MB Aug 6 07:50:13.113272 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 6 07:50:13.113287 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 6 07:50:13.113303 kernel: pinctrl core: initialized pinctrl subsystem Aug 6 07:50:13.113328 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 6 07:50:13.113344 kernel: audit: initializing netlink subsys (disabled) Aug 6 07:50:13.113374 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 6 07:50:13.113403 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 6 07:50:13.113429 kernel: audit: type=2000 audit(1722930611.756:1): state=initialized audit_enabled=0 res=1 Aug 6 07:50:13.113450 kernel: cpuidle: using governor menu Aug 6 07:50:13.113472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 6 07:50:13.113493 kernel: dca service started, version 1.12.1 Aug 6 07:50:13.113514 kernel: PCI: Using configuration type 1 for base access Aug 6 07:50:13.113549 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 6 07:50:13.113574 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 6 07:50:13.113602 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 6 07:50:13.113623 kernel: ACPI: Added _OSI(Module Device) Aug 6 07:50:13.113649 kernel: ACPI: Added _OSI(Processor Device) Aug 6 07:50:13.113687 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 6 07:50:13.113712 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 6 07:50:13.113737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 6 07:50:13.113763 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 6 07:50:13.113788 kernel: ACPI: Interpreter enabled Aug 6 07:50:13.113814 kernel: ACPI: PM: (supports S0 S5) Aug 6 07:50:13.113836 kernel: ACPI: Using IOAPIC for interrupt routing Aug 6 07:50:13.113854 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 6 07:50:13.113878 kernel: PCI: Using E820 reservations for host bridge windows Aug 6 07:50:13.113901 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 6 07:50:13.113923 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 6 07:50:13.114236 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 6 07:50:13.114400 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 6 07:50:13.114792 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 6 07:50:13.114825 kernel: acpiphp: Slot [3] registered Aug 6 07:50:13.114855 kernel: acpiphp: Slot [4] registered Aug 6 07:50:13.114876 kernel: acpiphp: Slot [5] registered Aug 6 07:50:13.114898 kernel: acpiphp: Slot [6] registered Aug 6 07:50:13.114920 kernel: acpiphp: Slot [7] registered Aug 6 07:50:13.114939 kernel: acpiphp: Slot [8] registered Aug 6 07:50:13.114963 kernel: acpiphp: Slot [9] registered Aug 6 07:50:13.114987 kernel: acpiphp: Slot [10] registered Aug 6 07:50:13.115008 kernel: acpiphp: Slot [11] registered Aug 6 
07:50:13.115029 kernel: acpiphp: Slot [12] registered Aug 6 07:50:13.115055 kernel: acpiphp: Slot [13] registered Aug 6 07:50:13.115076 kernel: acpiphp: Slot [14] registered Aug 6 07:50:13.115097 kernel: acpiphp: Slot [15] registered Aug 6 07:50:13.115118 kernel: acpiphp: Slot [16] registered Aug 6 07:50:13.115142 kernel: acpiphp: Slot [17] registered Aug 6 07:50:13.115164 kernel: acpiphp: Slot [18] registered Aug 6 07:50:13.115185 kernel: acpiphp: Slot [19] registered Aug 6 07:50:13.115207 kernel: acpiphp: Slot [20] registered Aug 6 07:50:13.115228 kernel: acpiphp: Slot [21] registered Aug 6 07:50:13.115249 kernel: acpiphp: Slot [22] registered Aug 6 07:50:13.115274 kernel: acpiphp: Slot [23] registered Aug 6 07:50:13.115295 kernel: acpiphp: Slot [24] registered Aug 6 07:50:13.115318 kernel: acpiphp: Slot [25] registered Aug 6 07:50:13.115340 kernel: acpiphp: Slot [26] registered Aug 6 07:50:13.115364 kernel: acpiphp: Slot [27] registered Aug 6 07:50:13.115385 kernel: acpiphp: Slot [28] registered Aug 6 07:50:13.115406 kernel: acpiphp: Slot [29] registered Aug 6 07:50:13.115432 kernel: acpiphp: Slot [30] registered Aug 6 07:50:13.115456 kernel: acpiphp: Slot [31] registered Aug 6 07:50:13.115481 kernel: PCI host bridge to bus 0000:00 Aug 6 07:50:13.115751 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 6 07:50:13.115901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 6 07:50:13.116033 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 6 07:50:13.116167 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 6 07:50:13.116316 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 6 07:50:13.116466 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 6 07:50:13.116706 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 6 07:50:13.116895 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 6 07:50:13.117075 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 6 07:50:13.117225 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 6 07:50:13.117407 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 6 07:50:13.117611 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 6 07:50:13.117858 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 6 07:50:13.118040 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 6 07:50:13.118235 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 6 07:50:13.118393 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 6 07:50:13.121616 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 6 07:50:13.121909 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 6 07:50:13.122092 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 6 07:50:13.122297 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 6 07:50:13.122470 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 6 07:50:13.122654 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 6 07:50:13.122811 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 6 07:50:13.122986 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 6 07:50:13.123141 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 6 07:50:13.123325 kernel: pci 0000:00:03.0: 
[1af4:1000] type 00 class 0x020000 Aug 6 07:50:13.123479 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 6 07:50:13.124428 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 6 07:50:13.127296 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 6 07:50:13.127626 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 6 07:50:13.127816 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 6 07:50:13.127968 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 6 07:50:13.128139 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 6 07:50:13.128325 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 6 07:50:13.128500 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 6 07:50:13.128785 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 6 07:50:13.128944 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 6 07:50:13.129148 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 6 07:50:13.129298 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 6 07:50:13.129489 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 6 07:50:13.129835 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 6 07:50:13.130035 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 6 07:50:13.130201 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Aug 6 07:50:13.130374 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 6 07:50:13.131468 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 6 07:50:13.132222 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 6 07:50:13.132434 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 6 07:50:13.132630 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 6 07:50:13.132665 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 6 07:50:13.132687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 6 07:50:13.132709 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 6 07:50:13.132728 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 6 07:50:13.132743 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 6 07:50:13.132777 kernel: iommu: Default domain type: Translated Aug 6 07:50:13.132803 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 6 07:50:13.132824 kernel: PCI: Using ACPI for IRQ routing Aug 6 07:50:13.132846 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 6 07:50:13.132867 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 6 07:50:13.132888 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 6 07:50:13.133074 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 6 07:50:13.133238 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 6 07:50:13.133396 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 6 07:50:13.133429 kernel: vgaarb: loaded Aug 6 07:50:13.133451 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 6 07:50:13.133472 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 6 07:50:13.133493 kernel: clocksource: Switched to clocksource kvm-clock Aug 6 07:50:13.133514 kernel: VFS: Disk quotas dquot_6.6.0 Aug 6 07:50:13.135263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 6 07:50:13.135286 kernel: pnp: PnP ACPI init Aug 6 
07:50:13.135300 kernel: pnp: PnP ACPI: found 4 devices Aug 6 07:50:13.135313 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 6 07:50:13.135347 kernel: NET: Registered PF_INET protocol family Aug 6 07:50:13.135366 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 6 07:50:13.135380 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 6 07:50:13.135393 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 6 07:50:13.135406 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 6 07:50:13.135419 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 6 07:50:13.135432 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 6 07:50:13.135445 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 07:50:13.135458 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 07:50:13.135477 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 6 07:50:13.135490 kernel: NET: Registered PF_XDP protocol family Aug 6 07:50:13.137484 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 6 07:50:13.137748 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 6 07:50:13.137913 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 6 07:50:13.138052 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 6 07:50:13.138193 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 6 07:50:13.138371 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 6 07:50:13.138614 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 6 07:50:13.138643 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 6 07:50:13.138803 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 43467 usecs Aug 6 07:50:13.138827 kernel: PCI: CLS 0 bytes, default 64 Aug 6 07:50:13.138846 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 6 07:50:13.138869 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Aug 6 07:50:13.138886 kernel: Initialise system trusted keyrings Aug 6 07:50:13.138900 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 6 07:50:13.138925 kernel: Key type asymmetric registered Aug 6 07:50:13.138938 kernel: Asymmetric key parser 'x509' registered Aug 6 07:50:13.138952 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 6 07:50:13.138966 kernel: io scheduler mq-deadline registered Aug 6 07:50:13.138980 kernel: io scheduler kyber registered Aug 6 07:50:13.138998 kernel: io scheduler bfq registered Aug 6 07:50:13.139014 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 6 07:50:13.139031 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 6 07:50:13.139046 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 6 07:50:13.139066 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 6 07:50:13.139079 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 6 07:50:13.139093 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 6 07:50:13.139107 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 6 07:50:13.139121 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 6 07:50:13.139135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 6 
07:50:13.139358 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 6 07:50:13.139386 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 6 07:50:13.139547 kernel: rtc_cmos 00:03: registered as rtc0 Aug 6 07:50:13.139713 kernel: rtc_cmos 00:03: setting system clock to 2024-08-06T07:50:12 UTC (1722930612) Aug 6 07:50:13.139862 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 6 07:50:13.139883 kernel: intel_pstate: CPU model not supported Aug 6 07:50:13.139898 kernel: NET: Registered PF_INET6 protocol family Aug 6 07:50:13.139913 kernel: Segment Routing with IPv6 Aug 6 07:50:13.139927 kernel: In-situ OAM (IOAM) with IPv6 Aug 6 07:50:13.139942 kernel: NET: Registered PF_PACKET protocol family Aug 6 07:50:13.139956 kernel: Key type dns_resolver registered Aug 6 07:50:13.139979 kernel: IPI shorthand broadcast: enabled Aug 6 07:50:13.139993 kernel: sched_clock: Marking stable (1432002729, 178918092)->(1654255113, -43334292) Aug 6 07:50:13.140008 kernel: registered taskstats version 1 Aug 6 07:50:13.140023 kernel: Loading compiled-in X.509 certificates Aug 6 07:50:13.140036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 6 07:50:13.140049 kernel: Key type .fscrypt registered Aug 6 07:50:13.140065 kernel: Key type fscrypt-provisioning registered Aug 6 07:50:13.140081 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 6 07:50:13.140098 kernel: ima: Allocated hash algorithm: sha1 Aug 6 07:50:13.140119 kernel: ima: No architecture policies found Aug 6 07:50:13.140136 kernel: clk: Disabling unused clocks Aug 6 07:50:13.140154 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 6 07:50:13.140171 kernel: Write protecting the kernel read-only data: 36864k Aug 6 07:50:13.140189 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 6 07:50:13.140236 kernel: Run /init as init process Aug 6 07:50:13.140257 kernel: with arguments: Aug 6 07:50:13.140273 kernel: /init Aug 6 07:50:13.140287 kernel: with environment: Aug 6 07:50:13.140305 kernel: HOME=/ Aug 6 07:50:13.140323 kernel: TERM=linux Aug 6 07:50:13.140339 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 6 07:50:13.140362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 07:50:13.140384 systemd[1]: Detected virtualization kvm. Aug 6 07:50:13.140403 systemd[1]: Detected architecture x86-64. Aug 6 07:50:13.140422 systemd[1]: Running in initrd. Aug 6 07:50:13.140445 systemd[1]: No hostname configured, using default hostname. Aug 6 07:50:13.140462 systemd[1]: Hostname set to . Aug 6 07:50:13.140482 systemd[1]: Initializing machine ID from VM UUID. Aug 6 07:50:13.140497 systemd[1]: Queued start job for default target initrd.target. Aug 6 07:50:13.140515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:50:13.140707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:50:13.140728 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 6 07:50:13.140743 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 07:50:13.140766 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 6 07:50:13.140781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 6 07:50:13.140807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 6 07:50:13.140838 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 6 07:50:13.140869 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:50:13.140896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:50:13.140920 systemd[1]: Reached target paths.target - Path Units. Aug 6 07:50:13.140948 systemd[1]: Reached target slices.target - Slice Units. Aug 6 07:50:13.140971 systemd[1]: Reached target swap.target - Swaps. Aug 6 07:50:13.140995 systemd[1]: Reached target timers.target - Timer Units. Aug 6 07:50:13.141023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 07:50:13.141047 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 07:50:13.141070 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 6 07:50:13.141098 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 6 07:50:13.141121 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:50:13.141145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 07:50:13.141168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:50:13.141195 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 07:50:13.141221 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 6 07:50:13.141237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 07:50:13.141252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 6 07:50:13.141280 systemd[1]: Starting systemd-fsck-usr.service... Aug 6 07:50:13.141304 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 07:50:13.141328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 07:50:13.141351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:50:13.141416 systemd-journald[184]: Collecting audit messages is disabled. Aug 6 07:50:13.141473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 6 07:50:13.141497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:50:13.141524 systemd[1]: Finished systemd-fsck-usr.service. Aug 6 07:50:13.141682 systemd-journald[184]: Journal started Aug 6 07:50:13.141743 systemd-journald[184]: Runtime Journal (/run/log/journal/5c1347f1f9794b6ba347d9cfaad0cdc9) is 4.9M, max 39.3M, 34.4M free. Aug 6 07:50:13.146050 systemd-modules-load[185]: Inserted module 'overlay' Aug 6 07:50:13.152562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 07:50:13.202567 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Aug 6 07:50:13.205857 systemd-modules-load[185]: Inserted module 'br_netfilter' Aug 6 07:50:13.223146 kernel: Bridge firewalling registered Aug 6 07:50:13.232390 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 07:50:13.232512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 07:50:13.235417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:13.244215 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 07:50:13.254012 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:50:13.260813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:50:13.263839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 07:50:13.274410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 6 07:50:13.292698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:50:13.306792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:50:13.307761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:50:13.310698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 07:50:13.318842 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 6 07:50:13.322793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 07:50:13.348476 dracut-cmdline[217]: dracut-dracut-053 Aug 6 07:50:13.356059 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 07:50:13.390291 systemd-resolved[218]: Positive Trust Anchors: Aug 6 07:50:13.390313 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 07:50:13.390375 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 07:50:13.395609 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 6 07:50:13.397959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 07:50:13.399198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:50:13.490613 kernel: SCSI subsystem initialized Aug 6 07:50:13.504600 kernel: Loading iSCSI transport class v2.0-870. 
Aug 6 07:50:13.521590 kernel: iscsi: registered transport (tcp) Aug 6 07:50:13.554978 kernel: iscsi: registered transport (qla4xxx) Aug 6 07:50:13.555087 kernel: QLogic iSCSI HBA Driver Aug 6 07:50:13.622873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 6 07:50:13.627859 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 6 07:50:13.682749 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 6 07:50:13.682869 kernel: device-mapper: uevent: version 1.0.3 Aug 6 07:50:13.686590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 6 07:50:13.742596 kernel: raid6: avx2x4 gen() 16416 MB/s Aug 6 07:50:13.760597 kernel: raid6: avx2x2 gen() 16483 MB/s Aug 6 07:50:13.778918 kernel: raid6: avx2x1 gen() 12921 MB/s Aug 6 07:50:13.779011 kernel: raid6: using algorithm avx2x2 gen() 16483 MB/s Aug 6 07:50:13.798143 kernel: raid6: .... xor() 17799 MB/s, rmw enabled Aug 6 07:50:13.798240 kernel: raid6: using avx2x2 recovery algorithm Aug 6 07:50:13.830582 kernel: xor: automatically using best checksumming function avx Aug 6 07:50:14.056574 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 6 07:50:14.073504 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 6 07:50:14.093146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:50:14.111338 systemd-udevd[401]: Using default interface naming scheme 'v255'. Aug 6 07:50:14.120041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:50:14.127820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 6 07:50:14.157567 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 6 07:50:14.206816 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 07:50:14.222129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 07:50:14.315147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 07:50:14.326341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 6 07:50:14.366605 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 6 07:50:14.371491 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 07:50:14.374847 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:50:14.377194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 07:50:14.386338 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 6 07:50:14.427143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 6 07:50:14.440569 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 6 07:50:14.524117 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 6 07:50:14.524341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 6 07:50:14.524363 kernel: GPT:9289727 != 125829119 Aug 6 07:50:14.524382 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 6 07:50:14.524400 kernel: GPT:9289727 != 125829119 Aug 6 07:50:14.524418 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 6 07:50:14.524436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:50:14.524456 kernel: libata version 3.00 loaded. 
Aug 6 07:50:14.524483 kernel: cryptd: max_cpu_qlen set to 1000 Aug 6 07:50:14.524501 kernel: scsi host0: Virtio SCSI HBA Aug 6 07:50:14.525937 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 6 07:50:14.526266 kernel: scsi host1: ata_piix Aug 6 07:50:14.526464 kernel: scsi host2: ata_piix Aug 6 07:50:14.526948 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 6 07:50:14.526973 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 6 07:50:14.527005 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 6 07:50:14.531521 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Aug 6 07:50:14.539179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 07:50:14.539382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:50:14.550275 kernel: ACPI: bus type USB registered Aug 6 07:50:14.550347 kernel: usbcore: registered new interface driver usbfs Aug 6 07:50:14.550366 kernel: usbcore: registered new interface driver hub Aug 6 07:50:14.550386 kernel: usbcore: registered new device driver usb Aug 6 07:50:14.541228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:50:14.542329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:50:14.542696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:14.551254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:50:14.567841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:50:14.636843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:14.643940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 07:50:14.689011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:50:14.725410 kernel: AVX2 version of gcm_enc/dec engaged. Aug 6 07:50:14.729588 kernel: AES CTR mode by8 optimization enabled Aug 6 07:50:14.746645 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (450) Aug 6 07:50:14.771168 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 6 07:50:14.789590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455) Aug 6 07:50:14.795170 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 6 07:50:14.798484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 6 07:50:14.829333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 6 07:50:14.839509 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 6 07:50:14.845109 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 6 07:50:14.845379 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 6 07:50:14.845648 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 6 07:50:14.845849 kernel: hub 1-0:1.0: USB hub found Aug 6 07:50:14.846128 kernel: hub 1-0:1.0: 2 ports detected Aug 6 07:50:14.841113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Aug 6 07:50:14.854069 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 6 07:50:14.876402 disk-uuid[551]: Primary Header is updated. Aug 6 07:50:14.876402 disk-uuid[551]: Secondary Entries is updated. Aug 6 07:50:14.876402 disk-uuid[551]: Secondary Header is updated. Aug 6 07:50:14.889563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:50:14.900580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:50:14.914638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:50:15.913631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 07:50:15.914915 disk-uuid[552]: The operation has completed successfully. Aug 6 07:50:15.983768 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 6 07:50:15.983886 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 6 07:50:16.008905 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 6 07:50:16.015968 sh[565]: Success Aug 6 07:50:16.040566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 6 07:50:16.142426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 6 07:50:16.151910 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 6 07:50:16.152740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 6 07:50:16.181972 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 6 07:50:16.182066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:50:16.183815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 6 07:50:16.185726 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 6 07:50:16.188196 kernel: BTRFS info (device dm-0): using free space tree Aug 6 07:50:16.204782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 6 07:50:16.206369 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 6 07:50:16.213897 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 6 07:50:16.217801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 6 07:50:16.239254 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:50:16.239349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:50:16.239372 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:50:16.245690 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:50:16.260096 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 6 07:50:16.262836 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:50:16.279362 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 6 07:50:16.285840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 6 07:50:16.378021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 07:50:16.389983 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 6 07:50:16.431555 systemd-networkd[750]: lo: Link UP Aug 6 07:50:16.432807 systemd-networkd[750]: lo: Gained carrier Aug 6 07:50:16.436426 systemd-networkd[750]: Enumeration completed Aug 6 07:50:16.436710 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 07:50:16.437458 systemd[1]: Reached target network.target - Network. Aug 6 07:50:16.439137 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 6 07:50:16.439144 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 6 07:50:16.443510 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 07:50:16.443517 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 6 07:50:16.446601 systemd-networkd[750]: eth0: Link UP Aug 6 07:50:16.446608 systemd-networkd[750]: eth0: Gained carrier Aug 6 07:50:16.446626 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 6 07:50:16.452976 systemd-networkd[750]: eth1: Link UP Aug 6 07:50:16.452983 systemd-networkd[750]: eth1: Gained carrier Aug 6 07:50:16.453004 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 07:50:16.466673 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253 Aug 6 07:50:16.474647 systemd-networkd[750]: eth0: DHCPv4 address 64.23.226.177/20, gateway 64.23.224.1 acquired from 169.254.169.253 Aug 6 07:50:16.488194 ignition[667]: Ignition 2.19.0 Aug 6 07:50:16.488214 ignition[667]: Stage: fetch-offline Aug 6 07:50:16.488324 ignition[667]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:16.488343 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:16.488523 ignition[667]: parsed url from cmdline: "" Aug 6 07:50:16.491957 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 07:50:16.488550 ignition[667]: no config URL provided Aug 6 07:50:16.488558 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 07:50:16.488569 ignition[667]: no config at "/usr/lib/ignition/user.ign" Aug 6 07:50:16.488577 ignition[667]: failed to fetch config: resource requires networking Aug 6 07:50:16.489018 ignition[667]: Ignition finished successfully Aug 6 07:50:16.502051 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 6 07:50:16.526948 ignition[760]: Ignition 2.19.0 Aug 6 07:50:16.526966 ignition[760]: Stage: fetch Aug 6 07:50:16.527274 ignition[760]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:16.527295 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:16.527458 ignition[760]: parsed url from cmdline: "" Aug 6 07:50:16.527464 ignition[760]: no config URL provided Aug 6 07:50:16.527477 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 07:50:16.527490 ignition[760]: no config at "/usr/lib/ignition/user.ign" Aug 6 07:50:16.527517 ignition[760]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 6 07:50:16.558313 ignition[760]: GET result: OK Aug 6 07:50:16.559087 ignition[760]: parsing config with SHA512: 964c13ed0311ae53b33b4071921bd9660cfdf83b1c12339f620d6e8e8a6b877d3ca411d84e462b8bc1c7a311e716efc1f892f5ca11e687a58b89f423ba5ecf4f Aug 6 07:50:16.569139 unknown[760]: fetched base config from "system" Aug 6 07:50:16.569154 unknown[760]: fetched base config from "system" Aug 6 07:50:16.569166 unknown[760]: fetched user config from "digitalocean" Aug 6 07:50:16.572419 ignition[760]: fetch: fetch complete Aug 6 07:50:16.572443 ignition[760]: fetch: fetch passed Aug 6 07:50:16.572570 ignition[760]: Ignition finished successfully Aug 6 07:50:16.575265 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 6 07:50:16.580932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 6 07:50:16.617737 ignition[767]: Ignition 2.19.0 Aug 6 07:50:16.617752 ignition[767]: Stage: kargs Aug 6 07:50:16.618017 ignition[767]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:16.618030 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:16.620947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 6 07:50:16.619413 ignition[767]: kargs: kargs passed Aug 6 07:50:16.619490 ignition[767]: Ignition finished successfully Aug 6 07:50:16.628924 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 6 07:50:16.650809 ignition[774]: Ignition 2.19.0 Aug 6 07:50:16.650826 ignition[774]: Stage: disks Aug 6 07:50:16.651117 ignition[774]: no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:16.651135 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:16.653036 ignition[774]: disks: disks passed Aug 6 07:50:16.653140 ignition[774]: Ignition finished successfully Aug 6 07:50:16.654987 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 6 07:50:16.657053 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 6 07:50:16.662228 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 6 07:50:16.663489 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 6 07:50:16.664741 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 07:50:16.665928 systemd[1]: Reached target basic.target - Basic System. Aug 6 07:50:16.683897 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 6 07:50:16.707494 systemd-fsck[783]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 6 07:50:16.716630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 6 07:50:16.722790 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 6 07:50:16.873643 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 6 07:50:16.874362 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 6 07:50:16.875820 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 6 07:50:16.885801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 07:50:16.889955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 6 07:50:16.895828 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 6 07:50:16.903054 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 6 07:50:16.913924 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (791) Aug 6 07:50:16.913957 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:50:16.913972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:50:16.913985 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:50:16.915142 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 6 07:50:16.916291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 6 07:50:16.923703 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:50:16.923335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 6 07:50:16.928091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 6 07:50:16.935886 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 6 07:50:17.039601 coreos-metadata[794]: Aug 06 07:50:17.037 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:50:17.041796 coreos-metadata[793]: Aug 06 07:50:17.041 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:50:17.044983 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Aug 6 07:50:17.050659 coreos-metadata[794]: Aug 06 07:50:17.049 INFO Fetch successful Aug 6 07:50:17.051743 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Aug 6 07:50:17.053910 coreos-metadata[793]: Aug 06 07:50:17.053 INFO Fetch successful Aug 6 07:50:17.062562 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Aug 6 07:50:17.064039 coreos-metadata[794]: Aug 06 07:50:17.063 INFO wrote hostname ci-4012.1.0-0-e9cfdb5e55 to /sysroot/etc/hostname Aug 6 07:50:17.065279 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 6 07:50:17.065433 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Aug 6 07:50:17.067813 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 6 07:50:17.073579 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Aug 6 07:50:17.203009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 6 07:50:17.209811 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 6 07:50:17.213890 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 6 07:50:17.227169 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 6 07:50:17.229168 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:50:17.274121 ignition[911]: INFO : Ignition 2.19.0 Aug 6 07:50:17.275519 ignition[911]: INFO : Stage: mount Aug 6 07:50:17.275519 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:17.275519 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:17.278468 ignition[911]: INFO : mount: mount passed Aug 6 07:50:17.278468 ignition[911]: INFO : Ignition finished successfully Aug 6 07:50:17.281359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 6 07:50:17.282480 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 6 07:50:17.290763 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 6 07:50:17.309997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 07:50:17.343592 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925) Aug 6 07:50:17.348809 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 07:50:17.348894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 07:50:17.348923 kernel: BTRFS info (device vda6): using free space tree Aug 6 07:50:17.356569 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 07:50:17.360568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 6 07:50:17.399991 ignition[942]: INFO : Ignition 2.19.0 Aug 6 07:50:17.399991 ignition[942]: INFO : Stage: files Aug 6 07:50:17.401753 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:17.401753 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:17.404042 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Aug 6 07:50:17.404042 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 6 07:50:17.404042 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 6 07:50:17.407896 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 6 07:50:17.409100 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 6 07:50:17.409100 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 6 07:50:17.408513 unknown[942]: wrote ssh authorized keys file for user: core Aug 6 07:50:17.412104 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 07:50:17.412104 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 6 07:50:17.442083 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 6 07:50:17.495487 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 07:50:17.495487 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 6 07:50:17.498437 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 6 07:50:17.633155 systemd-networkd[750]: eth1: Gained 
IPv6LL Aug 6 07:50:17.953166 systemd-networkd[750]: eth0: Gained IPv6LL Aug 6 07:50:18.038508 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 6 07:50:18.110647 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 6 07:50:18.110647 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:50:18.114065 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 6 07:50:18.387145 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 6 07:50:18.670907 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 07:50:18.670907 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 6 07:50:18.674250 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 07:50:18.674250 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 07:50:18.674250 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 6 07:50:18.674250 
ignition[942]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 6 07:50:18.674250 ignition[942]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 6 07:50:18.674250 ignition[942]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 6 07:50:18.674250 ignition[942]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 6 07:50:18.674250 ignition[942]: INFO : files: files passed Aug 6 07:50:18.674250 ignition[942]: INFO : Ignition finished successfully Aug 6 07:50:18.674973 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 6 07:50:18.684921 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 6 07:50:18.694011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 6 07:50:18.701403 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 6 07:50:18.701635 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 6 07:50:18.722503 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:50:18.722503 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:50:18.725315 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 07:50:18.728450 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 07:50:18.729910 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 6 07:50:18.738986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 6 07:50:18.792208 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 6 07:50:18.792460 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 6 07:50:18.794804 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 6 07:50:18.795955 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 6 07:50:18.797672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 6 07:50:18.806903 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 6 07:50:18.840837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 07:50:18.849934 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 6 07:50:18.871835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:50:18.872849 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:50:18.874417 systemd[1]: Stopped target timers.target - Timer Units. Aug 6 07:50:18.875692 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 6 07:50:18.875946 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 07:50:18.877569 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 6 07:50:18.879066 systemd[1]: Stopped target basic.target - Basic System. Aug 6 07:50:18.880202 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 6 07:50:18.881352 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
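The files stage above boils down to two kinds of operations against /sysroot: fetching remote files to target paths (helm, the cilium CLI, the kubernetes sysext image) and writing links such as /etc/extensions/kubernetes.raw. A simplified sketch of both, with the URL and paths taken from the log; illustrative only, not Ignition's implementation:

import pathlib
import urllib.request

SYSROOT = pathlib.Path("/sysroot")

def write_remote_file(url: str, dest: str) -> None:
    """GET url and store it at SYSROOT/dest, creating parent directories as needed."""
    target = SYSROOT / dest.lstrip("/")
    target.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url, timeout=30) as resp:
        target.write_bytes(resp.read())

def write_link(link: str, target: str) -> None:
    """Create SYSROOT/link pointing at target, like the kubernetes.raw link above."""
    path = SYSROOT / link.lstrip("/")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.symlink_to(target)

if __name__ == "__main__":
    write_remote_file(
        "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw",
        "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
    )
    write_link("/etc/extensions/kubernetes.raw",
               "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw")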
Aug 6 07:50:18.882686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 6 07:50:18.883980 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 6 07:50:18.885484 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 07:50:18.886921 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 6 07:50:18.888294 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 6 07:50:18.889655 systemd[1]: Stopped target swap.target - Swaps. Aug 6 07:50:18.890755 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 6 07:50:18.890997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 6 07:50:18.892355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:50:18.893348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:50:18.894791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 6 07:50:18.895154 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:50:18.896370 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 6 07:50:18.896680 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 6 07:50:18.898509 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 6 07:50:18.898750 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 07:50:18.900670 systemd[1]: ignition-files.service: Deactivated successfully. Aug 6 07:50:18.900868 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 6 07:50:18.902135 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 6 07:50:18.902374 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 6 07:50:18.915459 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 6 07:50:18.919017 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 6 07:50:18.919799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 6 07:50:18.920102 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 07:50:18.924379 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 6 07:50:18.924623 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 07:50:18.940358 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 6 07:50:18.940548 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 6 07:50:18.962580 ignition[995]: INFO : Ignition 2.19.0 Aug 6 07:50:18.962580 ignition[995]: INFO : Stage: umount Aug 6 07:50:18.962580 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 07:50:18.962580 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 6 07:50:18.967346 ignition[995]: INFO : umount: umount passed Aug 6 07:50:18.967346 ignition[995]: INFO : Ignition finished successfully Aug 6 07:50:18.970061 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 6 07:50:18.970220 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 6 07:50:19.000755 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 6 07:50:19.002265 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 6 07:50:19.002412 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Aug 6 07:50:19.003419 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 6 07:50:19.003495 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 6 07:50:19.004446 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 6 07:50:19.004509 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 6 07:50:19.009358 systemd[1]: Stopped target network.target - Network. Aug 6 07:50:19.021865 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 6 07:50:19.021999 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 07:50:19.023295 systemd[1]: Stopped target paths.target - Path Units. Aug 6 07:50:19.024614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 6 07:50:19.028670 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:50:19.030186 systemd[1]: Stopped target slices.target - Slice Units. Aug 6 07:50:19.032449 systemd[1]: Stopped target sockets.target - Socket Units. Aug 6 07:50:19.046676 systemd[1]: iscsid.socket: Deactivated successfully. Aug 6 07:50:19.046758 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 07:50:19.047339 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 6 07:50:19.047391 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 07:50:19.048026 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 6 07:50:19.048114 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 6 07:50:19.050959 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 6 07:50:19.051047 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 6 07:50:19.052071 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 6 07:50:19.052746 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 6 07:50:19.054840 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 6 07:50:19.054999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 6 07:50:19.058619 systemd-networkd[750]: eth0: DHCPv6 lease lost Aug 6 07:50:19.059312 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 6 07:50:19.059472 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 6 07:50:19.061683 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 6 07:50:19.061862 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 6 07:50:19.062935 systemd-networkd[750]: eth1: DHCPv6 lease lost Aug 6 07:50:19.067454 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 6 07:50:19.067666 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 6 07:50:19.070727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 6 07:50:19.070838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:50:19.085371 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 6 07:50:19.086113 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 6 07:50:19.086240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 07:50:19.087114 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 6 07:50:19.087190 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:50:19.087968 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Aug 6 07:50:19.088048 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 6 07:50:19.089049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 6 07:50:19.089117 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 07:50:19.090498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:50:19.108639 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 6 07:50:19.109717 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:50:19.111765 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 6 07:50:19.111897 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 6 07:50:19.113240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 6 07:50:19.113342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 6 07:50:19.114111 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 6 07:50:19.114171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:50:19.115712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 6 07:50:19.115797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 6 07:50:19.117399 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 6 07:50:19.117587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 6 07:50:19.118692 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 07:50:19.118766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 07:50:19.135519 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 6 07:50:19.136442 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 6 07:50:19.136579 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:50:19.137625 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 6 07:50:19.137721 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 07:50:19.139222 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 6 07:50:19.139312 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:50:19.142039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:50:19.142130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:19.147361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 6 07:50:19.147568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 6 07:50:19.149353 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 6 07:50:19.160392 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 6 07:50:19.171140 systemd[1]: Switching root. Aug 6 07:50:19.248779 systemd-journald[184]: Journal stopped Aug 6 07:50:20.887968 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Aug 6 07:50:20.888064 kernel: SELinux: policy capability network_peer_controls=1 Aug 6 07:50:20.888082 kernel: SELinux: policy capability open_perms=1 Aug 6 07:50:20.888100 kernel: SELinux: policy capability extended_socket_class=1 Aug 6 07:50:20.888113 kernel: SELinux: policy capability always_check_network=0 Aug 6 07:50:20.888127 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 6 07:50:20.888143 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 6 07:50:20.888156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 6 07:50:20.888168 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 6 07:50:20.888187 kernel: audit: type=1403 audit(1722930619.500:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 6 07:50:20.888208 systemd[1]: Successfully loaded SELinux policy in 55.852ms. Aug 6 07:50:20.888233 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.895ms. Aug 6 07:50:20.888248 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 07:50:20.888267 systemd[1]: Detected virtualization kvm. Aug 6 07:50:20.888281 systemd[1]: Detected architecture x86-64. Aug 6 07:50:20.888296 systemd[1]: Detected first boot. Aug 6 07:50:20.888310 systemd[1]: Hostname set to . Aug 6 07:50:20.888324 systemd[1]: Initializing machine ID from VM UUID. Aug 6 07:50:20.888345 zram_generator::config[1038]: No configuration found. Aug 6 07:50:20.888359 systemd[1]: Populated /etc with preset unit settings. Aug 6 07:50:20.888373 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 6 07:50:20.888386 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 6 07:50:20.888400 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 6 07:50:20.888416 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 6 07:50:20.888429 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 6 07:50:20.888443 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 6 07:50:20.888463 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 6 07:50:20.888476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 6 07:50:20.888490 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 6 07:50:20.888503 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 6 07:50:20.888521 systemd[1]: Created slice user.slice - User and Session Slice. Aug 6 07:50:20.891631 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 07:50:20.891657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 07:50:20.891672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 6 07:50:20.891686 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 6 07:50:20.891720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Aug 6 07:50:20.891736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 07:50:20.891749 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 6 07:50:20.891763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 07:50:20.891776 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 6 07:50:20.891791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 6 07:50:20.891811 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 6 07:50:20.891825 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 6 07:50:20.891843 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 07:50:20.891857 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 07:50:20.891870 systemd[1]: Reached target slices.target - Slice Units. Aug 6 07:50:20.891882 systemd[1]: Reached target swap.target - Swaps. Aug 6 07:50:20.891896 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 6 07:50:20.891909 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 6 07:50:20.891925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 07:50:20.891945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 07:50:20.891959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 07:50:20.891972 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 6 07:50:20.891985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 6 07:50:20.891999 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 6 07:50:20.892013 systemd[1]: Mounting media.mount - External Media Directory... Aug 6 07:50:20.892027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:20.892040 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 6 07:50:20.892053 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 6 07:50:20.892072 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 6 07:50:20.892087 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 6 07:50:20.892100 systemd[1]: Reached target machines.target - Containers. Aug 6 07:50:20.892113 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 6 07:50:20.892127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:50:20.892140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 07:50:20.892153 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 6 07:50:20.892166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:50:20.892185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 07:50:20.892198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:50:20.892211 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 6 07:50:20.892225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 6 07:50:20.892238 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 6 07:50:20.892252 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 6 07:50:20.892266 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 6 07:50:20.892279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 6 07:50:20.892292 systemd[1]: Stopped systemd-fsck-usr.service. Aug 6 07:50:20.892313 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 07:50:20.892326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 07:50:20.892340 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 6 07:50:20.892353 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 6 07:50:20.892366 kernel: fuse: init (API version 7.39) Aug 6 07:50:20.892380 kernel: loop: module loaded Aug 6 07:50:20.892393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 07:50:20.892407 systemd[1]: verity-setup.service: Deactivated successfully. Aug 6 07:50:20.892420 systemd[1]: Stopped verity-setup.service. Aug 6 07:50:20.892439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:20.892453 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 6 07:50:20.892466 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 6 07:50:20.892479 systemd[1]: Mounted media.mount - External Media Directory. Aug 6 07:50:20.892493 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 6 07:50:20.892513 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 6 07:50:20.892569 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 6 07:50:20.892584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 07:50:20.892604 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 6 07:50:20.892619 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 6 07:50:20.892638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:50:20.892652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:50:20.892665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:50:20.892679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:50:20.892693 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 6 07:50:20.892709 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 6 07:50:20.892742 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:50:20.892765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:50:20.892794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 6 07:50:20.892814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 07:50:20.892846 kernel: ACPI: bus type drm_connector registered Aug 6 07:50:20.892873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 6 07:50:20.892912 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Aug 6 07:50:20.892933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 6 07:50:20.893006 systemd-journald[1110]: Collecting audit messages is disabled. Aug 6 07:50:20.893060 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 6 07:50:20.893100 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 6 07:50:20.893128 systemd-journald[1110]: Journal started Aug 6 07:50:20.893181 systemd-journald[1110]: Runtime Journal (/run/log/journal/5c1347f1f9794b6ba347d9cfaad0cdc9) is 4.9M, max 39.3M, 34.4M free. Aug 6 07:50:20.900837 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 6 07:50:20.391736 systemd[1]: Queued start job for default target multi-user.target. Aug 6 07:50:20.907627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 6 07:50:20.418636 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 6 07:50:20.419223 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 6 07:50:20.917565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:50:20.926464 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 6 07:50:20.931648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:50:20.940566 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 6 07:50:20.946564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:50:20.952620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:50:20.965234 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 6 07:50:20.977717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 07:50:20.995432 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 07:50:20.996637 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 6 07:50:20.999585 kernel: loop0: detected capacity change from 0 to 80568 Aug 6 07:50:21.004752 kernel: block loop0: the capability attribute has been deprecated. Aug 6 07:50:21.006238 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 07:50:21.006474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 07:50:21.009603 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 6 07:50:21.010924 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 6 07:50:21.025002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 6 07:50:21.029637 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 6 07:50:21.031035 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 6 07:50:21.049421 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 6 07:50:21.075569 kernel: loop1: detected capacity change from 0 to 8 Aug 6 07:50:21.107828 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Aug 6 07:50:21.110168 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 6 07:50:21.123861 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 6 07:50:21.131568 kernel: loop2: detected capacity change from 0 to 139760 Aug 6 07:50:21.132820 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 6 07:50:21.179500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:50:21.182467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 07:50:21.192806 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 6 07:50:21.198330 systemd-journald[1110]: Time spent on flushing to /var/log/journal/5c1347f1f9794b6ba347d9cfaad0cdc9 is 68.529ms for 1003 entries. Aug 6 07:50:21.198330 systemd-journald[1110]: System Journal (/var/log/journal/5c1347f1f9794b6ba347d9cfaad0cdc9) is 8.0M, max 195.6M, 187.6M free. Aug 6 07:50:21.285802 systemd-journald[1110]: Received client request to flush runtime journal. Aug 6 07:50:21.285891 kernel: loop3: detected capacity change from 0 to 209816 Aug 6 07:50:21.206220 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Aug 6 07:50:21.206246 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Aug 6 07:50:21.223205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 07:50:21.240892 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 6 07:50:21.244559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 6 07:50:21.246207 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 6 07:50:21.266993 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 6 07:50:21.295804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 6 07:50:21.320575 kernel: loop4: detected capacity change from 0 to 80568 Aug 6 07:50:21.341608 kernel: loop5: detected capacity change from 0 to 8 Aug 6 07:50:21.359566 kernel: loop6: detected capacity change from 0 to 139760 Aug 6 07:50:21.380852 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 6 07:50:21.389684 kernel: loop7: detected capacity change from 0 to 209816 Aug 6 07:50:21.392842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 07:50:21.421837 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 6 07:50:21.422638 (sd-merge)[1181]: Merged extensions into '/usr'. Aug 6 07:50:21.436852 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Aug 6 07:50:21.436878 systemd[1]: Reloading... Aug 6 07:50:21.489910 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Aug 6 07:50:21.489933 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Aug 6 07:50:21.680564 zram_generator::config[1209]: No configuration found. Aug 6 07:50:21.973749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:50:22.114665 systemd[1]: Reloading finished in 670 ms. 
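The (sd-merge) lines above show systemd-sysext discovering the staged extension images and merging them into /usr. A small sketch of how the names in the "Using extensions ..." line can be enumerated; the search directories are assumptions about the usual sysext locations, and the actual merge is an overlay mount performed by systemd itself:

import pathlib

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed sysext locations

def list_extensions() -> list[str]:
    """Collect extension names from *.raw images or directories in the search paths."""
    names = set()
    for base in SEARCH_PATHS:
        root = pathlib.Path(base)
        if not root.is_dir():
            continue
        for entry in root.iterdir():
            names.add(entry.name.removesuffix(".raw"))
    return sorted(names)

if __name__ == "__main__":
    print("Using extensions:", ", ".join(repr(name) for name in list_extensions()))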
Aug 6 07:50:22.154584 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 6 07:50:22.155973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 07:50:22.176844 systemd[1]: Starting ensure-sysext.service... Aug 6 07:50:22.193248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 6 07:50:22.228181 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Aug 6 07:50:22.228208 systemd[1]: Reloading... Aug 6 07:50:22.283336 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 6 07:50:22.304496 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 6 07:50:22.305256 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 6 07:50:22.317194 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 6 07:50:22.321141 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Aug 6 07:50:22.321262 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Aug 6 07:50:22.343793 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 07:50:22.343817 systemd-tmpfiles[1253]: Skipping /boot Aug 6 07:50:22.356571 zram_generator::config[1276]: No configuration found. Aug 6 07:50:22.387048 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 07:50:22.387068 systemd-tmpfiles[1253]: Skipping /boot Aug 6 07:50:22.646872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:50:22.754955 systemd[1]: Reloading finished in 525 ms. Aug 6 07:50:22.778271 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 6 07:50:22.786593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 07:50:22.809977 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 07:50:22.823946 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 6 07:50:22.827877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 6 07:50:22.841933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 07:50:22.852037 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 6 07:50:22.869047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.869423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:50:22.878631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:50:22.886002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 07:50:22.899220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:50:22.900364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 6 07:50:22.901748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.914141 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 6 07:50:22.918397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.919799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:50:22.920180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:50:22.920348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.927227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.927669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:50:22.940172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 07:50:22.943262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:50:22.943621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:22.954869 systemd[1]: Finished ensure-sysext.service. Aug 6 07:50:22.971976 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 6 07:50:23.005100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 6 07:50:23.026359 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 6 07:50:23.029547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:50:23.030669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:50:23.033348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:50:23.033826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:50:23.037711 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 07:50:23.044839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 07:50:23.052545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:50:23.055734 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:50:23.056356 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:50:23.056794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:50:23.061359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:50:23.090656 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 6 07:50:23.106239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Aug 6 07:50:23.113110 augenrules[1359]: No rules Aug 6 07:50:23.117934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 07:50:23.127002 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 6 07:50:23.130231 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 07:50:23.145606 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 6 07:50:23.187286 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 6 07:50:23.214384 systemd-udevd[1364]: Using default interface naming scheme 'v255'. Aug 6 07:50:23.282843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 07:50:23.293986 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 6 07:50:23.305034 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 6 07:50:23.306178 systemd[1]: Reached target time-set.target - System Time Set. Aug 6 07:50:23.323125 systemd-resolved[1328]: Positive Trust Anchors: Aug 6 07:50:23.323151 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 07:50:23.323206 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 07:50:23.338035 systemd-resolved[1328]: Using system hostname 'ci-4012.1.0-0-e9cfdb5e55'. Aug 6 07:50:23.340886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 07:50:23.341957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 07:50:23.448730 systemd-networkd[1377]: lo: Link UP Aug 6 07:50:23.449351 systemd-networkd[1377]: lo: Gained carrier Aug 6 07:50:23.451367 systemd-networkd[1377]: Enumeration completed Aug 6 07:50:23.451828 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 07:50:23.452851 systemd[1]: Reached target network.target - Network. Aug 6 07:50:23.462921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 6 07:50:23.484453 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 6 07:50:23.497193 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1375) Aug 6 07:50:23.512580 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Aug 6 07:50:23.542882 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 6 07:50:23.543749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:23.544009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 07:50:23.547106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 07:50:23.555980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 6 07:50:23.559845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 07:50:23.560832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 07:50:23.560904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 07:50:23.560929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 07:50:23.608667 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 07:50:23.608990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 07:50:23.621628 kernel: ISO 9660 Extensions: RRIP_1991A Aug 6 07:50:23.624363 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 6 07:50:23.627678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 07:50:23.627965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 07:50:23.634116 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 07:50:23.638799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 07:50:23.640881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 07:50:23.656672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 07:50:23.708079 systemd-networkd[1377]: eth1: Configuring with /run/systemd/network/10-42:c7:08:5b:d5:0b.network. Aug 6 07:50:23.710427 systemd-networkd[1377]: eth1: Link UP Aug 6 07:50:23.710443 systemd-networkd[1377]: eth1: Gained carrier Aug 6 07:50:23.718338 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Aug 6 07:50:23.720606 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 6 07:50:23.750591 kernel: ACPI: button: Power Button [PWRF] Aug 6 07:50:23.778607 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 6 07:50:23.779830 systemd-networkd[1377]: eth0: Configuring with /run/systemd/network/10-6a:31:66:5e:70:dd.network. Aug 6 07:50:23.780790 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Aug 6 07:50:23.782687 systemd-networkd[1377]: eth0: Link UP Aug 6 07:50:23.782708 systemd-networkd[1377]: eth0: Gained carrier Aug 6 07:50:23.784501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 07:50:23.793062 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Aug 6 07:50:23.836953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 6 07:50:23.855307 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 6 07:50:23.887278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 6 07:50:23.917570 kernel: mousedev: PS/2 mouse device common for all mice Aug 6 07:50:23.918162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
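systemd-networkd above picks up generated runtime units named after each interface's MAC address, e.g. /run/systemd/network/10-42:c7:08:5b:d5:0b.network for eth1. A sketch of writing such a unit with a [Match] on that MAC; the DHCP setting is an assumption for illustration and is not taken from the log:

import pathlib

RUNTIME_DIR = pathlib.Path("/run/systemd/network")

def write_mac_network_unit(mac: str) -> pathlib.Path:
    """Write a runtime .network unit named after the MAC, matching that interface."""
    unit = RUNTIME_DIR / f"10-{mac}.network"
    unit.parent.mkdir(parents=True, exist_ok=True)
    unit.write_text(f"[Match]\nMACAddress={mac}\n\n[Network]\nDHCP=yes\n")
    return unit

if __name__ == "__main__":
    print(write_mac_network_unit("42:c7:08:5b:d5:0b"))  # eth1's MAC from the log above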
Aug 6 07:50:23.921569 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 6 07:50:23.924566 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 6 07:50:23.932570 kernel: Console: switching to colour dummy device 80x25 Aug 6 07:50:23.934139 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 6 07:50:23.934253 kernel: [drm] features: -context_init Aug 6 07:50:23.938582 kernel: [drm] number of scanouts: 1 Aug 6 07:50:23.941575 kernel: [drm] number of cap sets: 0 Aug 6 07:50:23.946570 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 6 07:50:23.971879 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 6 07:50:23.972058 kernel: Console: switching to colour frame buffer device 128x48 Aug 6 07:50:23.989582 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 6 07:50:23.991037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:50:23.991443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:24.008112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:50:24.038647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 07:50:24.039034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:24.047093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 07:50:24.206787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 07:50:24.219785 kernel: EDAC MC: Ver: 3.0.0 Aug 6 07:50:24.253864 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 6 07:50:24.266887 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 6 07:50:24.287977 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:50:24.328348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 6 07:50:24.330224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 07:50:24.330422 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 07:50:24.330838 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 6 07:50:24.331100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 6 07:50:24.331524 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 6 07:50:24.334386 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 6 07:50:24.334551 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 6 07:50:24.334651 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 6 07:50:24.334710 systemd[1]: Reached target paths.target - Path Units. Aug 6 07:50:24.334795 systemd[1]: Reached target timers.target - Timer Units. Aug 6 07:50:24.337587 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 6 07:50:24.340616 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 6 07:50:24.349221 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 6 07:50:24.357914 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Aug 6 07:50:24.361511 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 6 07:50:24.363749 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 07:50:24.364463 systemd[1]: Reached target basic.target - Basic System. Aug 6 07:50:24.365151 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:50:24.365220 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 6 07:50:24.377575 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 07:50:24.376774 systemd[1]: Starting containerd.service - containerd container runtime... Aug 6 07:50:24.383801 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 6 07:50:24.387760 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 6 07:50:24.398809 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 6 07:50:24.408586 jq[1442]: false Aug 6 07:50:24.408905 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 6 07:50:24.410776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 6 07:50:24.415641 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 6 07:50:24.427811 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 6 07:50:24.433815 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 6 07:50:24.445872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 6 07:50:24.464731 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 6 07:50:24.469890 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 6 07:50:24.470864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 6 07:50:24.478882 systemd[1]: Starting update-engine.service - Update Engine... Aug 6 07:50:24.485789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 6 07:50:24.492759 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 6 07:50:24.505116 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 6 07:50:24.505642 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 6 07:50:24.506145 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 6 07:50:24.507635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 6 07:50:24.517831 systemd[1]: motdgen.service: Deactivated successfully. Aug 6 07:50:24.518644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 6 07:50:24.603760 extend-filesystems[1443]: Found loop4 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found loop5 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found loop6 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found loop7 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda1 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda2 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda3 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found usr Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda4 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda6 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda7 Aug 6 07:50:24.610191 extend-filesystems[1443]: Found vda9 Aug 6 07:50:24.610191 extend-filesystems[1443]: Checking size of /dev/vda9 Aug 6 07:50:24.729518 extend-filesystems[1443]: Resized partition /dev/vda9 Aug 6 07:50:24.739326 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 6 07:50:24.739414 coreos-metadata[1440]: Aug 06 07:50:24.709 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:50:24.739414 coreos-metadata[1440]: Aug 06 07:50:24.727 INFO Fetch successful Aug 6 07:50:24.749967 jq[1454]: true Aug 6 07:50:24.635888 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 6 07:50:24.750377 update_engine[1452]: I0806 07:50:24.634675 1452 main.cc:92] Flatcar Update Engine starting Aug 6 07:50:24.750377 update_engine[1452]: I0806 07:50:24.646286 1452 update_check_scheduler.cc:74] Next update check in 5m39s Aug 6 07:50:24.752735 tar[1469]: linux-amd64/helm Aug 6 07:50:24.640236 dbus-daemon[1441]: [system] SELinux support is enabled Aug 6 07:50:24.753341 extend-filesystems[1481]: resize2fs 1.47.0 (5-Feb-2023) Aug 6 07:50:24.640601 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 6 07:50:24.778141 jq[1476]: true Aug 6 07:50:24.670325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 6 07:50:24.670406 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 6 07:50:24.683199 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 6 07:50:24.683408 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 6 07:50:24.683440 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 6 07:50:24.691686 systemd[1]: Started update-engine.service - Update Engine. Aug 6 07:50:24.730423 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 6 07:50:24.912719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389) Aug 6 07:50:24.916868 systemd-logind[1450]: New seat seat0. 
Aug 6 07:50:24.919076 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Aug 6 07:50:24.919102 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 6 07:50:24.922068 systemd[1]: Started systemd-logind.service - User Login Management. Aug 6 07:50:24.999224 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 6 07:50:25.002486 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 6 07:50:25.018563 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 6 07:50:25.091745 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 6 07:50:25.097638 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 6 07:50:25.097638 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 6 07:50:25.097638 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 6 07:50:25.118129 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Aug 6 07:50:25.118129 extend-filesystems[1443]: Found vdb Aug 6 07:50:25.103925 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 6 07:50:25.104223 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 6 07:50:25.137089 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:50:25.147632 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 6 07:50:25.159965 systemd[1]: Starting sshkeys.service... Aug 6 07:50:25.191263 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 6 07:50:25.221605 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 6 07:50:25.244450 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 6 07:50:25.258316 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 6 07:50:25.295500 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 6 07:50:25.301069 systemd[1]: issuegen.service: Deactivated successfully. Aug 6 07:50:25.303003 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 6 07:50:25.312769 systemd-networkd[1377]: eth1: Gained IPv6LL Aug 6 07:50:25.313404 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Aug 6 07:50:25.324248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 6 07:50:25.328425 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 6 07:50:25.343153 systemd[1]: Reached target network-online.target - Network is Online. Aug 6 07:50:25.360894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:50:25.369381 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 6 07:50:25.388620 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 6 07:50:25.411081 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 6 07:50:25.426838 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 6 07:50:25.432863 systemd[1]: Reached target getty.target - Login Prompts. Aug 6 07:50:25.444236 systemd-networkd[1377]: eth0: Gained IPv6LL Aug 6 07:50:25.445001 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. 
Aug 6 07:50:25.466388 coreos-metadata[1526]: Aug 06 07:50:25.466 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 6 07:50:25.490215 coreos-metadata[1526]: Aug 06 07:50:25.488 INFO Fetch successful Aug 6 07:50:25.502669 unknown[1526]: wrote ssh authorized keys file for user: core Aug 6 07:50:25.512225 containerd[1471]: time="2024-08-06T07:50:25.512074797Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 6 07:50:25.522609 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 6 07:50:25.584949 containerd[1471]: time="2024-08-06T07:50:25.584772120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 6 07:50:25.584949 containerd[1471]: time="2024-08-06T07:50:25.584860874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.586412 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys" Aug 6 07:50:25.588285 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 6 07:50:25.596121 systemd[1]: Finished sshkeys.service. Aug 6 07:50:25.596884 containerd[1471]: time="2024-08-06T07:50:25.596826873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:50:25.596884 containerd[1471]: time="2024-08-06T07:50:25.596878274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.597191 containerd[1471]: time="2024-08-06T07:50:25.597165972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:50:25.597442 containerd[1471]: time="2024-08-06T07:50:25.597193573Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 6 07:50:25.597494 containerd[1471]: time="2024-08-06T07:50:25.597453665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600139113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600199652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600386789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600769055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600796629Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.600811876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.601049516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.601076585Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 6 07:50:25.601189 containerd[1471]: time="2024-08-06T07:50:25.601192276Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 6 07:50:25.602939 containerd[1471]: time="2024-08-06T07:50:25.601228668Z" level=info msg="metadata content store policy set" policy=shared Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.626682335Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.626802692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.626822064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.626931808Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.627656557Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.627691626Z" level=info msg="NRI interface is disabled by configuration." Aug 6 07:50:25.627742 containerd[1471]: time="2024-08-06T07:50:25.627710736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.627972592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628003602Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628024274Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628046360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628071747Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628098244Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628117487Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628136384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628160860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628186112Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628205653Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628224395Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 6 07:50:25.628421 containerd[1471]: time="2024-08-06T07:50:25.628413449Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 6 07:50:25.628880 containerd[1471]: time="2024-08-06T07:50:25.628806480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 6 07:50:25.628880 containerd[1471]: time="2024-08-06T07:50:25.628841292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.628880 containerd[1471]: time="2024-08-06T07:50:25.628872794Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 6 07:50:25.629006 containerd[1471]: time="2024-08-06T07:50:25.628907480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 6 07:50:25.629006 containerd[1471]: time="2024-08-06T07:50:25.628989054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629082 containerd[1471]: time="2024-08-06T07:50:25.629007439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629082 containerd[1471]: time="2024-08-06T07:50:25.629038425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629082 containerd[1471]: time="2024-08-06T07:50:25.629056235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629082 containerd[1471]: time="2024-08-06T07:50:25.629073305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629399 containerd[1471]: time="2024-08-06T07:50:25.629101322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629399 containerd[1471]: time="2024-08-06T07:50:25.629119102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.629399 containerd[1471]: time="2024-08-06T07:50:25.629136667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Aug 6 07:50:25.629399 containerd[1471]: time="2024-08-06T07:50:25.629154546Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629411628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629436267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629453495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629503388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629557949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629579626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629599118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631092 containerd[1471]: time="2024-08-06T07:50:25.629628178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 6 07:50:25.631456 containerd[1471]: time="2024-08-06T07:50:25.630049528Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 6 07:50:25.631456 containerd[1471]: time="2024-08-06T07:50:25.630175738Z" level=info msg="Connect containerd service" Aug 6 07:50:25.631456 containerd[1471]: time="2024-08-06T07:50:25.630214105Z" level=info msg="using legacy CRI server" Aug 6 07:50:25.631456 containerd[1471]: time="2024-08-06T07:50:25.630235158Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 6 07:50:25.631456 containerd[1471]: time="2024-08-06T07:50:25.630671979Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.632711911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.632805512Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.632834933Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.632852118Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.632885631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.633794730Z" level=info msg="Start subscribing containerd event" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.633872409Z" level=info msg="Start recovering state" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.633964958Z" level=info msg="Start event monitor" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.633980770Z" level=info msg="Start snapshots syncer" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.633993752Z" level=info msg="Start cni network conf syncer for default" Aug 6 07:50:25.634017 containerd[1471]: time="2024-08-06T07:50:25.634004238Z" level=info msg="Start streaming server" Aug 6 07:50:25.635048 containerd[1471]: time="2024-08-06T07:50:25.633968452Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 6 07:50:25.635048 containerd[1471]: time="2024-08-06T07:50:25.634276289Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 6 07:50:25.635048 containerd[1471]: time="2024-08-06T07:50:25.634347584Z" level=info msg="containerd successfully booted in 0.129322s" Aug 6 07:50:25.634494 systemd[1]: Started containerd.service - containerd container runtime. Aug 6 07:50:26.029877 tar[1469]: linux-amd64/LICENSE Aug 6 07:50:26.029877 tar[1469]: linux-amd64/README.md Aug 6 07:50:26.045407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 6 07:50:26.852919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:50:26.856305 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 6 07:50:26.860065 systemd[1]: Startup finished in 1.614s (kernel) + 6.732s (initrd) + 7.414s (userspace) = 15.760s. Aug 6 07:50:26.869005 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:50:27.884567 kubelet[1563]: E0806 07:50:27.884391 1563 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:50:27.889492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:50:27.889782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:50:27.890281 systemd[1]: kubelet.service: Consumed 1.480s CPU time. Aug 6 07:50:34.252576 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 6 07:50:34.258077 systemd[1]: Started sshd@0-64.23.226.177:22-139.178.89.65:48968.service - OpenSSH per-connection server daemon (139.178.89.65:48968). Aug 6 07:50:34.364205 sshd[1576]: Accepted publickey for core from 139.178.89.65 port 48968 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:34.368169 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:34.385173 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 6 07:50:34.398173 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 6 07:50:34.402845 systemd-logind[1450]: New session 1 of user core. Aug 6 07:50:34.424887 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 6 07:50:34.440159 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 6 07:50:34.445860 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:34.648012 systemd[1580]: Queued start job for default target default.target. Aug 6 07:50:34.661622 systemd[1580]: Created slice app.slice - User Application Slice. Aug 6 07:50:34.661679 systemd[1580]: Reached target paths.target - Paths. Aug 6 07:50:34.661714 systemd[1580]: Reached target timers.target - Timers. Aug 6 07:50:34.664095 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 6 07:50:34.684476 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 6 07:50:34.684749 systemd[1580]: Reached target sockets.target - Sockets. Aug 6 07:50:34.684779 systemd[1580]: Reached target basic.target - Basic System. Aug 6 07:50:34.684849 systemd[1580]: Reached target default.target - Main User Target. Aug 6 07:50:34.685059 systemd[1580]: Startup finished in 226ms. 
Aug 6 07:50:34.685248 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 6 07:50:34.698951 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 6 07:50:34.779198 systemd[1]: Started sshd@1-64.23.226.177:22-139.178.89.65:48980.service - OpenSSH per-connection server daemon (139.178.89.65:48980). Aug 6 07:50:34.831554 sshd[1591]: Accepted publickey for core from 139.178.89.65 port 48980 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:34.833258 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:34.841712 systemd-logind[1450]: New session 2 of user core. Aug 6 07:50:34.850823 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 6 07:50:34.919270 sshd[1591]: pam_unix(sshd:session): session closed for user core Aug 6 07:50:34.934764 systemd[1]: sshd@1-64.23.226.177:22-139.178.89.65:48980.service: Deactivated successfully. Aug 6 07:50:34.937387 systemd[1]: session-2.scope: Deactivated successfully. Aug 6 07:50:34.941979 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Aug 6 07:50:34.946114 systemd[1]: Started sshd@2-64.23.226.177:22-139.178.89.65:48994.service - OpenSSH per-connection server daemon (139.178.89.65:48994). Aug 6 07:50:34.948767 systemd-logind[1450]: Removed session 2. Aug 6 07:50:35.005794 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 48994 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:35.008125 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:35.015576 systemd-logind[1450]: New session 3 of user core. Aug 6 07:50:35.023848 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 6 07:50:35.084743 sshd[1598]: pam_unix(sshd:session): session closed for user core Aug 6 07:50:35.095779 systemd[1]: sshd@2-64.23.226.177:22-139.178.89.65:48994.service: Deactivated successfully. Aug 6 07:50:35.098805 systemd[1]: session-3.scope: Deactivated successfully. Aug 6 07:50:35.101681 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Aug 6 07:50:35.109263 systemd[1]: Started sshd@3-64.23.226.177:22-139.178.89.65:49008.service - OpenSSH per-connection server daemon (139.178.89.65:49008). Aug 6 07:50:35.110827 systemd-logind[1450]: Removed session 3. Aug 6 07:50:35.175277 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 49008 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:35.177389 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:35.184400 systemd-logind[1450]: New session 4 of user core. Aug 6 07:50:35.193005 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 6 07:50:35.259989 sshd[1605]: pam_unix(sshd:session): session closed for user core Aug 6 07:50:35.271127 systemd[1]: sshd@3-64.23.226.177:22-139.178.89.65:49008.service: Deactivated successfully. Aug 6 07:50:35.274499 systemd[1]: session-4.scope: Deactivated successfully. Aug 6 07:50:35.277887 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Aug 6 07:50:35.281164 systemd[1]: Started sshd@4-64.23.226.177:22-139.178.89.65:49018.service - OpenSSH per-connection server daemon (139.178.89.65:49018). Aug 6 07:50:35.283933 systemd-logind[1450]: Removed session 4. 
Aug 6 07:50:35.336831 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 49018 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:35.339151 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:35.346923 systemd-logind[1450]: New session 5 of user core. Aug 6 07:50:35.352989 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 6 07:50:35.433002 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 6 07:50:35.433495 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:50:35.449405 sudo[1615]: pam_unix(sudo:session): session closed for user root Aug 6 07:50:35.453658 sshd[1612]: pam_unix(sshd:session): session closed for user core Aug 6 07:50:35.468735 systemd[1]: sshd@4-64.23.226.177:22-139.178.89.65:49018.service: Deactivated successfully. Aug 6 07:50:35.471458 systemd[1]: session-5.scope: Deactivated successfully. Aug 6 07:50:35.474868 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Aug 6 07:50:35.489271 systemd[1]: Started sshd@5-64.23.226.177:22-139.178.89.65:49034.service - OpenSSH per-connection server daemon (139.178.89.65:49034). Aug 6 07:50:35.492005 systemd-logind[1450]: Removed session 5. Aug 6 07:50:35.539089 sshd[1620]: Accepted publickey for core from 139.178.89.65 port 49034 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:35.541608 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:35.550670 systemd-logind[1450]: New session 6 of user core. Aug 6 07:50:35.559923 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 6 07:50:35.624125 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 6 07:50:35.624641 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:50:35.630297 sudo[1624]: pam_unix(sudo:session): session closed for user root Aug 6 07:50:35.639044 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 6 07:50:35.639944 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:50:35.665214 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 6 07:50:35.667934 auditctl[1627]: No rules Aug 6 07:50:35.670114 systemd[1]: audit-rules.service: Deactivated successfully. Aug 6 07:50:35.670514 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 6 07:50:35.678144 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 07:50:35.721410 augenrules[1645]: No rules Aug 6 07:50:35.723395 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 07:50:35.725992 sudo[1623]: pam_unix(sudo:session): session closed for user root Aug 6 07:50:35.730083 sshd[1620]: pam_unix(sshd:session): session closed for user core Aug 6 07:50:35.744630 systemd[1]: sshd@5-64.23.226.177:22-139.178.89.65:49034.service: Deactivated successfully. Aug 6 07:50:35.747646 systemd[1]: session-6.scope: Deactivated successfully. Aug 6 07:50:35.750939 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Aug 6 07:50:35.756092 systemd[1]: Started sshd@6-64.23.226.177:22-139.178.89.65:49040.service - OpenSSH per-connection server daemon (139.178.89.65:49040). Aug 6 07:50:35.758807 systemd-logind[1450]: Removed session 6. 
Aug 6 07:50:35.817939 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 49040 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:50:35.820272 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:50:35.828942 systemd-logind[1450]: New session 7 of user core. Aug 6 07:50:35.838863 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 6 07:50:35.902213 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 6 07:50:35.903604 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 07:50:36.096035 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 6 07:50:36.106388 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 6 07:50:36.661353 dockerd[1666]: time="2024-08-06T07:50:36.661225539Z" level=info msg="Starting up" Aug 6 07:50:36.730439 systemd[1]: var-lib-docker-metacopy\x2dcheck1495758342-merged.mount: Deactivated successfully. Aug 6 07:50:36.773847 dockerd[1666]: time="2024-08-06T07:50:36.773421817Z" level=info msg="Loading containers: start." Aug 6 07:50:36.948556 kernel: Initializing XFRM netlink socket Aug 6 07:50:36.988039 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Aug 6 07:50:37.830819 systemd-resolved[1328]: Clock change detected. Flushing caches. Aug 6 07:50:37.831524 systemd-timesyncd[1342]: Contacted time server 5.161.184.148:123 (2.flatcar.pool.ntp.org). Aug 6 07:50:37.831813 systemd-timesyncd[1342]: Initial clock synchronization to Tue 2024-08-06 07:50:37.830392 UTC. Aug 6 07:50:37.833431 systemd-networkd[1377]: docker0: Link UP Aug 6 07:50:37.865540 dockerd[1666]: time="2024-08-06T07:50:37.865479296Z" level=info msg="Loading containers: done." Aug 6 07:50:37.982198 dockerd[1666]: time="2024-08-06T07:50:37.981195452Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 6 07:50:37.982198 dockerd[1666]: time="2024-08-06T07:50:37.981535581Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 6 07:50:37.982198 dockerd[1666]: time="2024-08-06T07:50:37.981718658Z" level=info msg="Daemon has completed initialization" Aug 6 07:50:38.049422 dockerd[1666]: time="2024-08-06T07:50:38.049225221Z" level=info msg="API listen on /run/docker.sock" Aug 6 07:50:38.050640 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 6 07:50:38.715554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 6 07:50:38.721053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:50:38.913693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 6 07:50:38.927560 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:50:39.038205 kubelet[1808]: E0806 07:50:39.036066 1808 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:50:39.042974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:50:39.043211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:50:39.202874 containerd[1471]: time="2024-08-06T07:50:39.202808157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 6 07:50:40.141460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703696193.mount: Deactivated successfully. Aug 6 07:50:41.944397 containerd[1471]: time="2024-08-06T07:50:41.944303606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:41.946564 containerd[1471]: time="2024-08-06T07:50:41.946493437Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317" Aug 6 07:50:41.950151 containerd[1471]: time="2024-08-06T07:50:41.950047564Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:41.957574 containerd[1471]: time="2024-08-06T07:50:41.957494633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:41.960355 containerd[1471]: time="2024-08-06T07:50:41.960290061Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 2.757411258s" Aug 6 07:50:41.960729 containerd[1471]: time="2024-08-06T07:50:41.960546181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 6 07:50:41.998293 containerd[1471]: time="2024-08-06T07:50:41.998222819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 6 07:50:44.014244 containerd[1471]: time="2024-08-06T07:50:44.014154867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:44.018780 containerd[1471]: time="2024-08-06T07:50:44.018674710Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067" Aug 6 07:50:44.023642 containerd[1471]: time="2024-08-06T07:50:44.023537328Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:44.034417 containerd[1471]: 
time="2024-08-06T07:50:44.034317860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:44.036892 containerd[1471]: time="2024-08-06T07:50:44.036693279Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 2.038410684s" Aug 6 07:50:44.036892 containerd[1471]: time="2024-08-06T07:50:44.036758311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 6 07:50:44.076297 containerd[1471]: time="2024-08-06T07:50:44.076238896Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 6 07:50:45.517716 containerd[1471]: time="2024-08-06T07:50:45.516757199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:45.521688 containerd[1471]: time="2024-08-06T07:50:45.521442759Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295" Aug 6 07:50:45.528061 containerd[1471]: time="2024-08-06T07:50:45.527954459Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:45.535331 containerd[1471]: time="2024-08-06T07:50:45.535251754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:45.538120 containerd[1471]: time="2024-08-06T07:50:45.537934777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.461633202s" Aug 6 07:50:45.538120 containerd[1471]: time="2024-08-06T07:50:45.538002440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 6 07:50:45.575061 containerd[1471]: time="2024-08-06T07:50:45.574940115Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 6 07:50:47.050360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018400737.mount: Deactivated successfully. 
Aug 6 07:50:47.729673 containerd[1471]: time="2024-08-06T07:50:47.729262548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:47.734458 containerd[1471]: time="2024-08-06T07:50:47.734359169Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769" Aug 6 07:50:47.738937 containerd[1471]: time="2024-08-06T07:50:47.738818543Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:47.745177 containerd[1471]: time="2024-08-06T07:50:47.745105844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:47.747156 containerd[1471]: time="2024-08-06T07:50:47.746890731Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 2.17187893s" Aug 6 07:50:47.747156 containerd[1471]: time="2024-08-06T07:50:47.746963355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\"" Aug 6 07:50:47.782605 containerd[1471]: time="2024-08-06T07:50:47.782514011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 6 07:50:48.718181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707729464.mount: Deactivated successfully. 
Aug 6 07:50:48.741153 containerd[1471]: time="2024-08-06T07:50:48.739684981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:48.743948 containerd[1471]: time="2024-08-06T07:50:48.743865150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Aug 6 07:50:48.747374 containerd[1471]: time="2024-08-06T07:50:48.747261671Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:48.755565 containerd[1471]: time="2024-08-06T07:50:48.755493318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:48.760252 containerd[1471]: time="2024-08-06T07:50:48.760128622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 977.549791ms" Aug 6 07:50:48.760252 containerd[1471]: time="2024-08-06T07:50:48.760241506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 6 07:50:48.815234 containerd[1471]: time="2024-08-06T07:50:48.815178547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 6 07:50:49.294249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 6 07:50:49.301021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:50:49.444846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:50:49.458785 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 07:50:49.578014 kubelet[1928]: E0806 07:50:49.577834 1928 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 07:50:49.582951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 07:50:49.583187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 07:50:50.241643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695402626.mount: Deactivated successfully. 
Aug 6 07:50:52.857005 containerd[1471]: time="2024-08-06T07:50:52.856921904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:52.863032 containerd[1471]: time="2024-08-06T07:50:52.862912666Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Aug 6 07:50:52.865839 containerd[1471]: time="2024-08-06T07:50:52.865735002Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:52.874203 containerd[1471]: time="2024-08-06T07:50:52.874100942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:52.878719 containerd[1471]: time="2024-08-06T07:50:52.876330314Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.060853694s" Aug 6 07:50:52.878719 containerd[1471]: time="2024-08-06T07:50:52.876394943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 6 07:50:52.914517 containerd[1471]: time="2024-08-06T07:50:52.914140100Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 6 07:50:54.025669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078050230.mount: Deactivated successfully. 
Aug 6 07:50:54.702021 containerd[1471]: time="2024-08-06T07:50:54.701898026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:54.712683 containerd[1471]: time="2024-08-06T07:50:54.712540083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Aug 6 07:50:54.733905 containerd[1471]: time="2024-08-06T07:50:54.733805335Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:54.743963 containerd[1471]: time="2024-08-06T07:50:54.743817684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:50:54.745650 containerd[1471]: time="2024-08-06T07:50:54.744676643Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.830466612s" Aug 6 07:50:54.745650 containerd[1471]: time="2024-08-06T07:50:54.744765085Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 6 07:50:57.805761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:50:57.819151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:50:57.860539 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Aug 6 07:50:57.860561 systemd[1]: Reloading... Aug 6 07:50:58.000621 zram_generator::config[2093]: No configuration found. Aug 6 07:50:58.263662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:50:58.425137 systemd[1]: Reloading finished in 563 ms. Aug 6 07:50:58.503643 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 6 07:50:58.503804 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 6 07:50:58.504499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:50:58.511091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:50:58.689892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:50:58.704537 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 07:50:58.784354 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:50:58.784354 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 6 07:50:58.784354 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:50:58.784991 kubelet[2150]: I0806 07:50:58.784488 2150 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 07:50:59.428005 kubelet[2150]: I0806 07:50:59.427949 2150 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 07:50:59.428005 kubelet[2150]: I0806 07:50:59.427989 2150 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 07:50:59.428306 kubelet[2150]: I0806 07:50:59.428288 2150 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 07:50:59.480647 kubelet[2150]: I0806 07:50:59.480335 2150 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:50:59.480826 kubelet[2150]: E0806 07:50:59.480719 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.226.177:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.504924 kubelet[2150]: I0806 07:50:59.504860 2150 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 6 07:50:59.509619 kubelet[2150]: I0806 07:50:59.508973 2150 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 07:50:59.509894 kubelet[2150]: I0806 07:50:59.509868 2150 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 07:50:59.512317 kubelet[2150]: I0806 07:50:59.511969 2150 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 07:50:59.512317 kubelet[2150]: I0806 07:50:59.512016 2150 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 
07:50:59.514536 kubelet[2150]: I0806 07:50:59.514499 2150 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:50:59.520444 kubelet[2150]: I0806 07:50:59.520394 2150 kubelet.go:393] "Attempting to sync node with API server" Aug 6 07:50:59.521218 kubelet[2150]: I0806 07:50:59.520670 2150 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 07:50:59.521218 kubelet[2150]: I0806 07:50:59.520738 2150 kubelet.go:309] "Adding apiserver pod source" Aug 6 07:50:59.521218 kubelet[2150]: I0806 07:50:59.520772 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 07:50:59.525481 kubelet[2150]: W0806 07:50:59.525022 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.525481 kubelet[2150]: E0806 07:50:59.525200 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.525481 kubelet[2150]: W0806 07:50:59.525319 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.226.177:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.525481 kubelet[2150]: E0806 07:50:59.525363 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.226.177:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.526380 kubelet[2150]: I0806 07:50:59.526021 2150 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 07:50:59.532073 kubelet[2150]: W0806 07:50:59.531318 2150 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 6 07:50:59.532730 kubelet[2150]: I0806 07:50:59.532683 2150 server.go:1232] "Started kubelet" Aug 6 07:50:59.537660 kubelet[2150]: I0806 07:50:59.537122 2150 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 07:50:59.538560 kubelet[2150]: I0806 07:50:59.538503 2150 server.go:462] "Adding debug handlers to kubelet server" Aug 6 07:50:59.542186 kubelet[2150]: I0806 07:50:59.541503 2150 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 07:50:59.542186 kubelet[2150]: I0806 07:50:59.541838 2150 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 07:50:59.545951 kubelet[2150]: I0806 07:50:59.545899 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 07:50:59.548616 kubelet[2150]: E0806 07:50:59.548267 2150 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 07:50:59.548616 kubelet[2150]: E0806 07:50:59.548310 2150 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 07:50:59.548616 kubelet[2150]: E0806 07:50:59.548412 2150 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.1.0-0-e9cfdb5e55.17e9144e6e128307", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.1.0-0-e9cfdb5e55", UID:"ci-4012.1.0-0-e9cfdb5e55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.1.0-0-e9cfdb5e55"}, FirstTimestamp:time.Date(2024, time.August, 6, 7, 50, 59, 532636935, time.Local), LastTimestamp:time.Date(2024, time.August, 6, 7, 50, 59, 532636935, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.1.0-0-e9cfdb5e55"}': 'Post "https://64.23.226.177:6443/api/v1/namespaces/default/events": dial tcp 64.23.226.177:6443: connect: connection refused'(may retry after sleeping) Aug 6 07:50:59.551887 kubelet[2150]: E0806 07:50:59.551853 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:50:59.552110 kubelet[2150]: I0806 07:50:59.552098 2150 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 07:50:59.552613 kubelet[2150]: I0806 07:50:59.552319 2150 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 07:50:59.552613 kubelet[2150]: I0806 07:50:59.552428 2150 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 07:50:59.553205 kubelet[2150]: W0806 07:50:59.553150 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.226.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.553400 kubelet[2150]: E0806 07:50:59.553387 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.226.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.554309 kubelet[2150]: E0806 07:50:59.554285 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.226.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-0-e9cfdb5e55?timeout=10s\": dial tcp 64.23.226.177:6443: connect: connection refused" interval="200ms" Aug 6 07:50:59.600244 kubelet[2150]: I0806 07:50:59.599830 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 07:50:59.602848 kubelet[2150]: I0806 07:50:59.602029 2150 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 6 07:50:59.602848 kubelet[2150]: I0806 07:50:59.602104 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 07:50:59.602848 kubelet[2150]: I0806 07:50:59.602154 2150 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 07:50:59.602848 kubelet[2150]: E0806 07:50:59.602269 2150 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 07:50:59.613861 kubelet[2150]: W0806 07:50:59.613626 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.613861 kubelet[2150]: E0806 07:50:59.613734 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:50:59.626635 kubelet[2150]: I0806 07:50:59.626551 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 07:50:59.627196 kubelet[2150]: I0806 07:50:59.626892 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 07:50:59.627196 kubelet[2150]: I0806 07:50:59.626936 2150 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:50:59.654518 kubelet[2150]: I0806 07:50:59.654473 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:50:59.655044 kubelet[2150]: E0806 07:50:59.655023 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.226.177:6443/api/v1/nodes\": dial tcp 64.23.226.177:6443: connect: connection refused" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:50:59.704470 kubelet[2150]: E0806 07:50:59.702355 2150 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 07:50:59.755654 kubelet[2150]: E0806 07:50:59.755567 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.226.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-0-e9cfdb5e55?timeout=10s\": dial tcp 64.23.226.177:6443: connect: connection refused" interval="400ms" Aug 6 07:50:59.856468 kubelet[2150]: I0806 07:50:59.856394 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:50:59.857305 kubelet[2150]: E0806 07:50:59.856952 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.226.177:6443/api/v1/nodes\": dial tcp 64.23.226.177:6443: connect: connection refused" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:50:59.871661 kubelet[2150]: I0806 07:50:59.871521 2150 policy_none.go:49] "None policy: Start" Aug 6 07:50:59.872442 kubelet[2150]: I0806 07:50:59.872403 2150 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 07:50:59.872442 kubelet[2150]: I0806 07:50:59.872444 2150 state_mem.go:35] "Initializing new in-memory state store" Aug 6 07:50:59.903300 kubelet[2150]: E0806 07:50:59.903245 2150 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 07:50:59.920643 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 6 07:50:59.938098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 6 07:50:59.943841 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 6 07:50:59.955303 kubelet[2150]: I0806 07:50:59.955180 2150 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 07:50:59.955953 kubelet[2150]: I0806 07:50:59.955548 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 07:50:59.960327 kubelet[2150]: E0806 07:50:59.958737 2150 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:00.156997 kubelet[2150]: E0806 07:51:00.156930 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.226.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-0-e9cfdb5e55?timeout=10s\": dial tcp 64.23.226.177:6443: connect: connection refused" interval="800ms" Aug 6 07:51:00.259562 kubelet[2150]: I0806 07:51:00.259199 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.260399 kubelet[2150]: E0806 07:51:00.260367 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.226.177:6443/api/v1/nodes\": dial tcp 64.23.226.177:6443: connect: connection refused" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.303773 kubelet[2150]: I0806 07:51:00.303641 2150 topology_manager.go:215] "Topology Admit Handler" podUID="b655b4db010d33f2a81d5f82e78a7af5" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.305842 kubelet[2150]: I0806 07:51:00.305277 2150 topology_manager.go:215] "Topology Admit Handler" podUID="e764233f8ad67e8c935637724d7ad348" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.307672 kubelet[2150]: I0806 07:51:00.307157 2150 topology_manager.go:215] "Topology Admit Handler" podUID="46e8f569fb533616d50eb93705e1600b" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.318724 systemd[1]: Created slice kubepods-burstable-podb655b4db010d33f2a81d5f82e78a7af5.slice - libcontainer container kubepods-burstable-podb655b4db010d33f2a81d5f82e78a7af5.slice. Aug 6 07:51:00.347074 systemd[1]: Created slice kubepods-burstable-pode764233f8ad67e8c935637724d7ad348.slice - libcontainer container kubepods-burstable-pode764233f8ad67e8c935637724d7ad348.slice. 
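The slice names systemd reports here follow the layout used with the systemd cgroup driver (CgroupDriver "systemd" in the container manager dump earlier): kubepods.slice at the top, one child slice per QoS class, and one slice per pod named after the pod UID from the Topology Admit Handler entries (the scheduler's pod slice is created just below). A small sketch of how those names compose, assuming the /sys/fs/cgroup mount point and ignoring the extra escaping applied to pod UIDs that contain dashes:

package main

import "fmt"

func main() {
	// Pod UIDs as they appear in the kubepods-burstable-pod*.slice unit names.
	uids := map[string]string{
		"b655b4db010d33f2a81d5f82e78a7af5": "kube-apiserver",
		"e764233f8ad67e8c935637724d7ad348": "kube-controller-manager",
		"46e8f569fb533616d50eb93705e1600b": "kube-scheduler",
	}
	for uid, pod := range uids {
		slice := "kubepods-burstable-pod" + uid + ".slice"
		// Assumed cgroup mount point; the nesting mirrors the slices created above.
		path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" + slice
		fmt.Printf("%-24s %s\n", pod, path)
	}
}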
Aug 6 07:51:00.357575 kubelet[2150]: I0806 07:51:00.357185 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.357575 kubelet[2150]: I0806 07:51:00.357408 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.357575 kubelet[2150]: I0806 07:51:00.357448 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.357575 kubelet[2150]: I0806 07:51:00.357494 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.357575 kubelet[2150]: I0806 07:51:00.357529 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46e8f569fb533616d50eb93705e1600b-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"46e8f569fb533616d50eb93705e1600b\") " pod="kube-system/kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.358142 kubelet[2150]: I0806 07:51:00.357564 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.358142 kubelet[2150]: I0806 07:51:00.357640 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.358142 kubelet[2150]: I0806 07:51:00.357682 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.358142 kubelet[2150]: I0806 07:51:00.357718 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:00.363066 systemd[1]: Created slice kubepods-burstable-pod46e8f569fb533616d50eb93705e1600b.slice - libcontainer container kubepods-burstable-pod46e8f569fb533616d50eb93705e1600b.slice. Aug 6 07:51:00.569812 kubelet[2150]: W0806 07:51:00.569512 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:00.569812 kubelet[2150]: E0806 07:51:00.569658 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:00.645544 kubelet[2150]: E0806 07:51:00.643690 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:00.645884 containerd[1471]: time="2024-08-06T07:51:00.644802203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-0-e9cfdb5e55,Uid:b655b4db010d33f2a81d5f82e78a7af5,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:00.657535 kubelet[2150]: E0806 07:51:00.656482 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:00.668694 kubelet[2150]: E0806 07:51:00.668324 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:00.669317 containerd[1471]: time="2024-08-06T07:51:00.669241253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55,Uid:e764233f8ad67e8c935637724d7ad348,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:00.669868 containerd[1471]: time="2024-08-06T07:51:00.669262589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-0-e9cfdb5e55,Uid:46e8f569fb533616d50eb93705e1600b,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:00.958882 kubelet[2150]: E0806 07:51:00.958689 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.226.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-0-e9cfdb5e55?timeout=10s\": dial tcp 64.23.226.177:6443: connect: connection refused" interval="1.6s" Aug 6 07:51:01.054762 kubelet[2150]: W0806 07:51:01.054528 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.226.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.054762 kubelet[2150]: E0806 07:51:01.054677 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://64.23.226.177:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.078708 kubelet[2150]: I0806 07:51:01.078299 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:01.079793 kubelet[2150]: W0806 07:51:01.079354 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.226.177:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.079793 kubelet[2150]: E0806 07:51:01.079481 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.226.177:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.079793 kubelet[2150]: E0806 07:51:01.079649 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.226.177:6443/api/v1/nodes\": dial tcp 64.23.226.177:6443: connect: connection refused" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:01.165521 kubelet[2150]: W0806 07:51:01.165240 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.165521 kubelet[2150]: E0806 07:51:01.165337 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.662703 kubelet[2150]: E0806 07:51:01.662628 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.226.177:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:01.663365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389389598.mount: Deactivated successfully. 
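The recurring "Nameserver limits exceeded" errors mean the resolv.conf handed to these pods listed more servers than the kubelet will pass through, so only the first three entries were applied (67.207.67.3, 67.207.67.2, 67.207.67.3, duplicate included). A sketch of that truncation, assuming a limit of three (inferred from the three-entry applied line) and using a documentation-range address as a placeholder for whatever was actually dropped:

package main

import "fmt"

func main() {
	// The first three entries mirror the "applied nameserver line" in the log;
	// 203.0.113.1 is a documentation-range placeholder for whichever extra
	// entries the kubelet actually omitted.
	nameservers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "203.0.113.1"}
	const maxNameservers = 3 // limit assumed from the three-entry line in the error
	applied := nameservers
	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
		fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
	}
	fmt.Println("applied nameserver line:", applied)
}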
Aug 6 07:51:01.723534 containerd[1471]: time="2024-08-06T07:51:01.723396574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:51:01.758323 containerd[1471]: time="2024-08-06T07:51:01.757154238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 6 07:51:01.774789 containerd[1471]: time="2024-08-06T07:51:01.773316634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:51:01.784761 containerd[1471]: time="2024-08-06T07:51:01.784136370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 07:51:01.793963 containerd[1471]: time="2024-08-06T07:51:01.793752642Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:51:01.809217 containerd[1471]: time="2024-08-06T07:51:01.800124022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:51:01.816524 containerd[1471]: time="2024-08-06T07:51:01.814957075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 07:51:01.822743 containerd[1471]: time="2024-08-06T07:51:01.822349827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 07:51:01.827475 containerd[1471]: time="2024-08-06T07:51:01.827394662Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.157708942s" Aug 6 07:51:01.832487 containerd[1471]: time="2024-08-06T07:51:01.832403779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.187453535s" Aug 6 07:51:01.839741 containerd[1471]: time="2024-08-06T07:51:01.839617958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.170033269s" Aug 6 07:51:02.319797 containerd[1471]: time="2024-08-06T07:51:02.318960509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:02.319797 containerd[1471]: time="2024-08-06T07:51:02.319075351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.319797 containerd[1471]: time="2024-08-06T07:51:02.319105112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:02.319797 containerd[1471]: time="2024-08-06T07:51:02.319130265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.321950 containerd[1471]: time="2024-08-06T07:51:02.321384369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:02.321950 containerd[1471]: time="2024-08-06T07:51:02.321509412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.321950 containerd[1471]: time="2024-08-06T07:51:02.321547906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:02.321950 containerd[1471]: time="2024-08-06T07:51:02.321580547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.336669 containerd[1471]: time="2024-08-06T07:51:02.336455383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:02.336891 containerd[1471]: time="2024-08-06T07:51:02.336620964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.336891 containerd[1471]: time="2024-08-06T07:51:02.336652151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:02.336891 containerd[1471]: time="2024-08-06T07:51:02.336676084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:02.386392 systemd[1]: Started cri-containerd-1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0.scope - libcontainer container 1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0. Aug 6 07:51:02.405715 systemd[1]: Started cri-containerd-1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3.scope - libcontainer container 1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3. Aug 6 07:51:02.428697 systemd[1]: Started cri-containerd-4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052.scope - libcontainer container 4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052. 
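Each pod sandbox containerd creates runs in a transient systemd scope named after its ID, which is why three "Started cri-containerd-….scope" units appear here; the RunPodSandbox results just below tie those IDs back to the kube-controller-manager, kube-apiserver and kube-scheduler pods. A sketch of the naming, assuming it is a plain "cri-containerd-" prefix and ".scope" suffix as these entries show:

package main

import "fmt"

// scopeUnit builds the transient unit name for a CRI sandbox or container,
// following the "cri-containerd-<id>.scope" pattern visible in the entries above.
func scopeUnit(id string) string {
	return "cri-containerd-" + id + ".scope"
}

func main() {
	// Sandbox IDs taken from the "Started cri-containerd-….scope" units above.
	for _, id := range []string{
		"1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0",
		"1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3",
		"4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052",
	} {
		fmt.Println(scopeUnit(id))
	}
}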
Aug 6 07:51:02.561305 kubelet[2150]: E0806 07:51:02.561245 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.226.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-0-e9cfdb5e55?timeout=10s\": dial tcp 64.23.226.177:6443: connect: connection refused" interval="3.2s" Aug 6 07:51:02.620077 containerd[1471]: time="2024-08-06T07:51:02.617584556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55,Uid:e764233f8ad67e8c935637724d7ad348,Namespace:kube-system,Attempt:0,} returns sandbox id \"1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0\"" Aug 6 07:51:02.621655 containerd[1471]: time="2024-08-06T07:51:02.621327564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-0-e9cfdb5e55,Uid:b655b4db010d33f2a81d5f82e78a7af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3\"" Aug 6 07:51:02.624144 kubelet[2150]: E0806 07:51:02.624090 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:02.624144 kubelet[2150]: E0806 07:51:02.624041 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:02.637793 containerd[1471]: time="2024-08-06T07:51:02.636722680Z" level=info msg="CreateContainer within sandbox \"1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 6 07:51:02.638641 containerd[1471]: time="2024-08-06T07:51:02.638327994Z" level=info msg="CreateContainer within sandbox \"1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 6 07:51:02.660288 containerd[1471]: time="2024-08-06T07:51:02.659383197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-0-e9cfdb5e55,Uid:46e8f569fb533616d50eb93705e1600b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052\"" Aug 6 07:51:02.663067 kubelet[2150]: E0806 07:51:02.663021 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:02.671409 containerd[1471]: time="2024-08-06T07:51:02.671347375Z" level=info msg="CreateContainer within sandbox \"4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 6 07:51:02.684951 kubelet[2150]: I0806 07:51:02.684884 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:02.687235 kubelet[2150]: E0806 07:51:02.687159 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.226.177:6443/api/v1/nodes\": dial tcp 64.23.226.177:6443: connect: connection refused" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:02.736752 kubelet[2150]: W0806 07:51:02.734980 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:02.736752 kubelet[2150]: E0806 07:51:02.735065 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.226.177:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-0-e9cfdb5e55&limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:02.740538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510339808.mount: Deactivated successfully. Aug 6 07:51:02.750114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680575795.mount: Deactivated successfully. Aug 6 07:51:02.775445 containerd[1471]: time="2024-08-06T07:51:02.775243438Z" level=info msg="CreateContainer within sandbox \"1971787c7e344d1a8a81ad68be8acbd047d1ecc7ffb38b3d0f104c7dc994edb0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5c26639157d3d900d8077455e21da8f358ab352a415c4dee18dbb41003ff83d\"" Aug 6 07:51:02.777636 containerd[1471]: time="2024-08-06T07:51:02.776460701Z" level=info msg="StartContainer for \"d5c26639157d3d900d8077455e21da8f358ab352a415c4dee18dbb41003ff83d\"" Aug 6 07:51:02.804279 containerd[1471]: time="2024-08-06T07:51:02.804191583Z" level=info msg="CreateContainer within sandbox \"1799b08fc91727cc689840d1868b86c05017036531ce9a947f73908d6bf590b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37aa2a18421494cb6f720ddf93b0246c1b3e18eeb63d4c5e6cf5caf8aeae1b9a\"" Aug 6 07:51:02.806646 containerd[1471]: time="2024-08-06T07:51:02.806575399Z" level=info msg="StartContainer for \"37aa2a18421494cb6f720ddf93b0246c1b3e18eeb63d4c5e6cf5caf8aeae1b9a\"" Aug 6 07:51:02.810972 containerd[1471]: time="2024-08-06T07:51:02.810881978Z" level=info msg="CreateContainer within sandbox \"4cee0fc321576de4a7a6fea4bc6c83fe6a48a4b78025e01b3300d061d86e2052\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9dcb9aa4c2c5fce215de0a5eb9a1dd8b142894d9c4d491b335c0efed24ab721b\"" Aug 6 07:51:02.811872 containerd[1471]: time="2024-08-06T07:51:02.811682251Z" level=info msg="StartContainer for \"9dcb9aa4c2c5fce215de0a5eb9a1dd8b142894d9c4d491b335c0efed24ab721b\"" Aug 6 07:51:02.852091 systemd[1]: Started cri-containerd-d5c26639157d3d900d8077455e21da8f358ab352a415c4dee18dbb41003ff83d.scope - libcontainer container d5c26639157d3d900d8077455e21da8f358ab352a415c4dee18dbb41003ff83d. Aug 6 07:51:02.908031 systemd[1]: Started cri-containerd-37aa2a18421494cb6f720ddf93b0246c1b3e18eeb63d4c5e6cf5caf8aeae1b9a.scope - libcontainer container 37aa2a18421494cb6f720ddf93b0246c1b3e18eeb63d4c5e6cf5caf8aeae1b9a. Aug 6 07:51:02.916027 systemd[1]: Started cri-containerd-9dcb9aa4c2c5fce215de0a5eb9a1dd8b142894d9c4d491b335c0efed24ab721b.scope - libcontainer container 9dcb9aa4c2c5fce215de0a5eb9a1dd8b142894d9c4d491b335c0efed24ab721b. 
Aug 6 07:51:03.101107 containerd[1471]: time="2024-08-06T07:51:03.100891208Z" level=info msg="StartContainer for \"d5c26639157d3d900d8077455e21da8f358ab352a415c4dee18dbb41003ff83d\" returns successfully" Aug 6 07:51:03.101433 containerd[1471]: time="2024-08-06T07:51:03.101229484Z" level=info msg="StartContainer for \"37aa2a18421494cb6f720ddf93b0246c1b3e18eeb63d4c5e6cf5caf8aeae1b9a\" returns successfully" Aug 6 07:51:03.146642 kubelet[2150]: W0806 07:51:03.146126 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:03.146642 kubelet[2150]: E0806 07:51:03.146192 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.226.177:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.226.177:6443: connect: connection refused Aug 6 07:51:03.239234 containerd[1471]: time="2024-08-06T07:51:03.238444962Z" level=info msg="StartContainer for \"9dcb9aa4c2c5fce215de0a5eb9a1dd8b142894d9c4d491b335c0efed24ab721b\" returns successfully" Aug 6 07:51:03.667812 kubelet[2150]: E0806 07:51:03.666902 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:03.675778 kubelet[2150]: E0806 07:51:03.674366 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:03.686091 kubelet[2150]: E0806 07:51:03.686041 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:04.686760 kubelet[2150]: E0806 07:51:04.686721 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:04.690761 kubelet[2150]: E0806 07:51:04.690689 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:05.690340 kubelet[2150]: E0806 07:51:05.690255 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:05.890664 kubelet[2150]: I0806 07:51:05.889056 2150 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:05.930826 kubelet[2150]: E0806 07:51:05.930565 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.1.0-0-e9cfdb5e55\" not found" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:05.938702 kubelet[2150]: I0806 07:51:05.938643 2150 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:05.962424 kubelet[2150]: E0806 07:51:05.962197 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.063228 kubelet[2150]: E0806 
07:51:06.063167 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.163785 kubelet[2150]: E0806 07:51:06.163710 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.264707 kubelet[2150]: E0806 07:51:06.264523 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.364888 kubelet[2150]: E0806 07:51:06.364714 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.465551 kubelet[2150]: E0806 07:51:06.465488 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.566646 kubelet[2150]: E0806 07:51:06.566435 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.667281 kubelet[2150]: E0806 07:51:06.667218 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.693524 kubelet[2150]: E0806 07:51:06.693487 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:06.769393 kubelet[2150]: E0806 07:51:06.769333 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.870505 kubelet[2150]: E0806 07:51:06.870445 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:06.970930 kubelet[2150]: E0806 07:51:06.970869 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.071937 kubelet[2150]: E0806 07:51:07.071856 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.099776 kubelet[2150]: E0806 07:51:07.099723 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:07.173147 kubelet[2150]: E0806 07:51:07.172927 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.274562 kubelet[2150]: E0806 07:51:07.273644 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.374401 kubelet[2150]: E0806 07:51:07.374350 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.475278 kubelet[2150]: E0806 07:51:07.475135 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.576264 kubelet[2150]: E0806 07:51:07.576204 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.676722 kubelet[2150]: E0806 07:51:07.676654 2150 kubelet_node_status.go:458] "Error getting 
the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.777541 kubelet[2150]: E0806 07:51:07.777355 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.878242 kubelet[2150]: E0806 07:51:07.878121 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:07.979304 kubelet[2150]: E0806 07:51:07.979236 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.080461 kubelet[2150]: E0806 07:51:08.080350 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.181309 kubelet[2150]: E0806 07:51:08.181231 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.282173 kubelet[2150]: E0806 07:51:08.282115 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.383104 kubelet[2150]: E0806 07:51:08.382936 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.483405 kubelet[2150]: E0806 07:51:08.483340 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.584679 kubelet[2150]: E0806 07:51:08.584581 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.685377 kubelet[2150]: E0806 07:51:08.685207 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.786414 kubelet[2150]: E0806 07:51:08.786274 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:08.873174 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-7.scope)... Aug 6 07:51:08.873759 systemd[1]: Reloading... Aug 6 07:51:08.886981 kubelet[2150]: E0806 07:51:08.886928 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-0-e9cfdb5e55\" not found" Aug 6 07:51:09.040627 zram_generator::config[2466]: No configuration found. Aug 6 07:51:09.248693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 07:51:09.442985 systemd[1]: Reloading finished in 568 ms. Aug 6 07:51:09.506032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:51:09.519249 systemd[1]: kubelet.service: Deactivated successfully. Aug 6 07:51:09.519714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 07:51:09.519960 systemd[1]: kubelet.service: Consumed 1.368s CPU time, 111.8M memory peak, 0B memory swap peak. Aug 6 07:51:09.531166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 07:51:09.738353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
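Before the reload requested here, the first kubelet instance (PID 2150) finally registered the node at 07:51:05.938643, a little over six seconds after its first attempt at 07:50:59.654473. A quick check of that arithmetic, with both timestamps copied from the log entries above and the date dropped since both fall in the same window:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "15:04:05.000000"
	// Parse errors ignored: both inputs are literals copied from the log.
	firstAttempt, _ := time.Parse(layout, "07:50:59.654473")
	registered, _ := time.Parse(layout, "07:51:05.938643")
	fmt.Println("time from first register attempt to success:", registered.Sub(firstAttempt))
}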
Aug 6 07:51:09.754291 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 07:51:09.881618 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:51:09.881618 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 6 07:51:09.881618 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 07:51:09.881618 kubelet[2511]: I0806 07:51:09.881018 2511 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 07:51:09.888852 kubelet[2511]: I0806 07:51:09.888601 2511 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 07:51:09.888852 kubelet[2511]: I0806 07:51:09.888778 2511 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 07:51:09.889790 kubelet[2511]: I0806 07:51:09.889751 2511 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 07:51:09.892336 kubelet[2511]: I0806 07:51:09.892015 2511 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 6 07:51:09.893635 kubelet[2511]: I0806 07:51:09.893389 2511 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 07:51:09.904556 kubelet[2511]: I0806 07:51:09.904514 2511 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905153 2511 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905358 2511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905386 2511 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905397 2511 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905446 2511 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:51:09.906225 kubelet[2511]: I0806 07:51:09.905579 2511 kubelet.go:393] "Attempting to sync node with API server" Aug 6 07:51:09.906582 kubelet[2511]: I0806 07:51:09.905648 2511 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 07:51:09.906582 kubelet[2511]: I0806 07:51:09.905683 2511 kubelet.go:309] "Adding apiserver pod source" Aug 6 07:51:09.906582 kubelet[2511]: I0806 07:51:09.905700 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 07:51:09.909633 kubelet[2511]: I0806 07:51:09.908007 2511 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 07:51:09.913211 kubelet[2511]: I0806 07:51:09.911553 2511 server.go:1232] "Started kubelet" Aug 6 07:51:09.916365 kubelet[2511]: I0806 07:51:09.916027 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 07:51:09.929812 kubelet[2511]: E0806 07:51:09.929776 2511 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 07:51:09.930610 kubelet[2511]: E0806 07:51:09.930035 2511 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 07:51:09.931307 sudo[2526]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 6 07:51:09.931802 sudo[2526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 6 07:51:09.940862 kubelet[2511]: I0806 07:51:09.934394 2511 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 07:51:09.940862 kubelet[2511]: I0806 07:51:09.936162 2511 server.go:462] "Adding debug handlers to kubelet server" Aug 6 07:51:09.940862 kubelet[2511]: I0806 07:51:09.937385 2511 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 07:51:09.940862 kubelet[2511]: I0806 07:51:09.937606 2511 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 07:51:09.940862 kubelet[2511]: I0806 07:51:09.940562 2511 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 07:51:09.941280 kubelet[2511]: I0806 07:51:09.941231 2511 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 07:51:09.941757 kubelet[2511]: I0806 07:51:09.941419 2511 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 07:51:09.966107 kubelet[2511]: I0806 07:51:09.964816 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 07:51:09.969791 kubelet[2511]: I0806 07:51:09.968876 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 6 07:51:09.969791 kubelet[2511]: I0806 07:51:09.968917 2511 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 07:51:09.969791 kubelet[2511]: I0806 07:51:09.968942 2511 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 07:51:09.969791 kubelet[2511]: E0806 07:51:09.969021 2511 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 07:51:10.044804 kubelet[2511]: I0806 07:51:10.044731 2511 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.064657 kubelet[2511]: I0806 07:51:10.064574 2511 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.064860 kubelet[2511]: I0806 07:51:10.064724 2511 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.069529 kubelet[2511]: E0806 07:51:10.069469 2511 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 07:51:10.123133 kubelet[2511]: I0806 07:51:10.123097 2511 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 07:51:10.123133 kubelet[2511]: I0806 07:51:10.123124 2511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 07:51:10.123845 kubelet[2511]: I0806 07:51:10.123150 2511 state_mem.go:36] "Initialized new in-memory state store" Aug 6 07:51:10.123845 kubelet[2511]: I0806 07:51:10.123368 2511 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 6 07:51:10.123845 kubelet[2511]: I0806 07:51:10.123406 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 6 07:51:10.123845 kubelet[2511]: I0806 07:51:10.123416 2511 policy_none.go:49] "None policy: Start" Aug 6 07:51:10.125297 kubelet[2511]: I0806 07:51:10.125261 2511 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 07:51:10.125407 
kubelet[2511]: I0806 07:51:10.125306 2511 state_mem.go:35] "Initializing new in-memory state store" Aug 6 07:51:10.126088 kubelet[2511]: I0806 07:51:10.125705 2511 state_mem.go:75] "Updated machine memory state" Aug 6 07:51:10.136322 kubelet[2511]: I0806 07:51:10.134872 2511 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 07:51:10.136482 kubelet[2511]: I0806 07:51:10.136443 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 07:51:10.269790 kubelet[2511]: I0806 07:51:10.269728 2511 topology_manager.go:215] "Topology Admit Handler" podUID="e764233f8ad67e8c935637724d7ad348" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.270068 kubelet[2511]: I0806 07:51:10.269903 2511 topology_manager.go:215] "Topology Admit Handler" podUID="46e8f569fb533616d50eb93705e1600b" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.270143 kubelet[2511]: I0806 07:51:10.270119 2511 topology_manager.go:215] "Topology Admit Handler" podUID="b655b4db010d33f2a81d5f82e78a7af5" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.289805 kubelet[2511]: W0806 07:51:10.288730 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:10.291117 kubelet[2511]: W0806 07:51:10.290770 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:10.291117 kubelet[2511]: W0806 07:51:10.290925 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:10.344625 kubelet[2511]: I0806 07:51:10.343181 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.344625 kubelet[2511]: I0806 07:51:10.343282 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.344625 kubelet[2511]: I0806 07:51:10.343350 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.344625 kubelet[2511]: I0806 07:51:10.343395 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46e8f569fb533616d50eb93705e1600b-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-0-e9cfdb5e55\" (UID: 
\"46e8f569fb533616d50eb93705e1600b\") " pod="kube-system/kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.344625 kubelet[2511]: I0806 07:51:10.343478 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.345132 kubelet[2511]: I0806 07:51:10.343519 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.345132 kubelet[2511]: I0806 07:51:10.343580 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e764233f8ad67e8c935637724d7ad348-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"e764233f8ad67e8c935637724d7ad348\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.345132 kubelet[2511]: I0806 07:51:10.343673 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.345132 kubelet[2511]: I0806 07:51:10.343709 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b655b4db010d33f2a81d5f82e78a7af5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" (UID: \"b655b4db010d33f2a81d5f82e78a7af5\") " pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:10.590419 kubelet[2511]: E0806 07:51:10.590339 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:10.592084 kubelet[2511]: E0806 07:51:10.592024 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:10.593448 kubelet[2511]: E0806 07:51:10.593411 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:10.862781 update_engine[1452]: I0806 07:51:10.861701 1452 update_attempter.cc:509] Updating boot flags... 
Aug 6 07:51:10.920226 kubelet[2511]: I0806 07:51:10.919265 2511 apiserver.go:52] "Watching apiserver" Aug 6 07:51:10.945894 kubelet[2511]: I0806 07:51:10.943215 2511 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 6 07:51:10.972866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2557) Aug 6 07:51:10.977867 sudo[2526]: pam_unix(sudo:session): session closed for user root Aug 6 07:51:11.109694 kubelet[2511]: W0806 07:51:11.100371 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:11.109694 kubelet[2511]: E0806 07:51:11.103248 2511 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.1.0-0-e9cfdb5e55\" already exists" pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:11.109694 kubelet[2511]: E0806 07:51:11.103759 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:11.109694 kubelet[2511]: W0806 07:51:11.105190 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:11.109694 kubelet[2511]: E0806 07:51:11.105258 2511 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55\" already exists" pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:11.116543 kubelet[2511]: W0806 07:51:11.115352 2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 07:51:11.119015 kubelet[2511]: E0806 07:51:11.118980 2511 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012.1.0-0-e9cfdb5e55\" already exists" pod="kube-system/kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" Aug 6 07:51:11.119957 kubelet[2511]: E0806 07:51:11.119425 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:11.125691 kubelet[2511]: E0806 07:51:11.125650 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:11.148644 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2557) Aug 6 07:51:11.248418 kubelet[2511]: I0806 07:51:11.248179 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.1.0-0-e9cfdb5e55" podStartSLOduration=1.244343678 podCreationTimestamp="2024-08-06 07:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:11.197079436 +0000 UTC m=+1.435175287" watchObservedRunningTime="2024-08-06 07:51:11.244343678 +0000 UTC m=+1.482439529" Aug 6 07:51:11.302601 kubelet[2511]: I0806 07:51:11.302535 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.1.0-0-e9cfdb5e55" podStartSLOduration=1.302466224 
podCreationTimestamp="2024-08-06 07:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:11.248816399 +0000 UTC m=+1.486912241" watchObservedRunningTime="2024-08-06 07:51:11.302466224 +0000 UTC m=+1.540562077" Aug 6 07:51:11.347438 kubelet[2511]: I0806 07:51:11.347263 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.1.0-0-e9cfdb5e55" podStartSLOduration=1.347214785 podCreationTimestamp="2024-08-06 07:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:11.305584946 +0000 UTC m=+1.543680797" watchObservedRunningTime="2024-08-06 07:51:11.347214785 +0000 UTC m=+1.585310636" Aug 6 07:51:12.069685 kubelet[2511]: E0806 07:51:12.068319 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:12.071790 kubelet[2511]: E0806 07:51:12.071551 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:12.073774 kubelet[2511]: E0806 07:51:12.073212 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:13.485154 sudo[1656]: pam_unix(sudo:session): session closed for user root Aug 6 07:51:13.490730 sshd[1653]: pam_unix(sshd:session): session closed for user core Aug 6 07:51:13.498186 systemd[1]: sshd@6-64.23.226.177:22-139.178.89.65:49040.service: Deactivated successfully. Aug 6 07:51:13.501761 systemd[1]: session-7.scope: Deactivated successfully. Aug 6 07:51:13.502122 systemd[1]: session-7.scope: Consumed 6.217s CPU time, 136.0M memory peak, 0B memory swap peak. Aug 6 07:51:13.503363 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Aug 6 07:51:13.505484 systemd-logind[1450]: Removed session 7. 
Aug 6 07:51:15.021717 kubelet[2511]: E0806 07:51:15.021624 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:15.077035 kubelet[2511]: E0806 07:51:15.076933 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:17.782191 kubelet[2511]: E0806 07:51:17.780369 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:18.083099 kubelet[2511]: E0806 07:51:18.083019 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:20.835933 kubelet[2511]: E0806 07:51:20.835820 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:22.436228 kubelet[2511]: I0806 07:51:22.436169 2511 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 6 07:51:22.437335 kubelet[2511]: I0806 07:51:22.437194 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 6 07:51:22.437403 containerd[1471]: time="2024-08-06T07:51:22.436884375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 6 07:51:22.957798 kubelet[2511]: I0806 07:51:22.957557 2511 topology_manager.go:215] "Topology Admit Handler" podUID="5b4ce886-55fd-4ff9-8771-b54c7231bdc2" podNamespace="kube-system" podName="kube-proxy-8thnv" Aug 6 07:51:22.965668 kubelet[2511]: I0806 07:51:22.964682 2511 topology_manager.go:215] "Topology Admit Handler" podUID="4eee4086-954f-403a-8894-47bbf74e673c" podNamespace="kube-system" podName="cilium-24hpg" Aug 6 07:51:22.982998 systemd[1]: Created slice kubepods-besteffort-pod5b4ce886_55fd_4ff9_8771_b54c7231bdc2.slice - libcontainer container kubepods-besteffort-pod5b4ce886_55fd_4ff9_8771_b54c7231bdc2.slice. Aug 6 07:51:23.008021 systemd[1]: Created slice kubepods-burstable-pod4eee4086_954f_403a_8894_47bbf74e673c.slice - libcontainer container kubepods-burstable-pod4eee4086_954f_403a_8894_47bbf74e673c.slice. 
Aug 6 07:51:23.042096 kubelet[2511]: I0806 07:51:23.042036 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-kernel\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.042096 kubelet[2511]: I0806 07:51:23.042103 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-hubble-tls\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042144 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-xtables-lock\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042186 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eee4086-954f-403a-8894-47bbf74e673c-cilium-config-path\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042221 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eee4086-954f-403a-8894-47bbf74e673c-clustermesh-secrets\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042257 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b4ce886-55fd-4ff9-8771-b54c7231bdc2-xtables-lock\") pod \"kube-proxy-8thnv\" (UID: \"5b4ce886-55fd-4ff9-8771-b54c7231bdc2\") " pod="kube-system/kube-proxy-8thnv" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042289 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-cgroup\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043570 kubelet[2511]: I0806 07:51:23.042353 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-bpf-maps\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042424 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-net\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042514 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-run\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042572 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-hostproc\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042649 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cx8j\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-kube-api-access-9cx8j\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042703 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cni-path\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.043905 kubelet[2511]: I0806 07:51:23.042740 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-lib-modules\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.044296 kubelet[2511]: I0806 07:51:23.042800 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b4ce886-55fd-4ff9-8771-b54c7231bdc2-lib-modules\") pod \"kube-proxy-8thnv\" (UID: \"5b4ce886-55fd-4ff9-8771-b54c7231bdc2\") " pod="kube-system/kube-proxy-8thnv" Aug 6 07:51:23.044296 kubelet[2511]: I0806 07:51:23.042841 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g7vz\" (UniqueName: \"kubernetes.io/projected/5b4ce886-55fd-4ff9-8771-b54c7231bdc2-kube-api-access-5g7vz\") pod \"kube-proxy-8thnv\" (UID: \"5b4ce886-55fd-4ff9-8771-b54c7231bdc2\") " pod="kube-system/kube-proxy-8thnv" Aug 6 07:51:23.044296 kubelet[2511]: I0806 07:51:23.042876 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-etc-cni-netd\") pod \"cilium-24hpg\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " pod="kube-system/cilium-24hpg" Aug 6 07:51:23.044296 kubelet[2511]: I0806 07:51:23.042953 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b4ce886-55fd-4ff9-8771-b54c7231bdc2-kube-proxy\") pod \"kube-proxy-8thnv\" (UID: \"5b4ce886-55fd-4ff9-8771-b54c7231bdc2\") " pod="kube-system/kube-proxy-8thnv" Aug 6 07:51:23.234622 kubelet[2511]: I0806 07:51:23.233954 2511 topology_manager.go:215] "Topology Admit Handler" podUID="c2614905-9a91-4d95-add5-b63d8bc90613" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-rgr4f" Aug 6 07:51:23.258268 systemd[1]: Created slice kubepods-besteffort-podc2614905_9a91_4d95_add5_b63d8bc90613.slice - libcontainer 
container kubepods-besteffort-podc2614905_9a91_4d95_add5_b63d8bc90613.slice. Aug 6 07:51:23.301204 kubelet[2511]: E0806 07:51:23.301155 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.305638 containerd[1471]: time="2024-08-06T07:51:23.303200522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8thnv,Uid:5b4ce886-55fd-4ff9-8771-b54c7231bdc2,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:23.315039 kubelet[2511]: E0806 07:51:23.314987 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.315910 containerd[1471]: time="2024-08-06T07:51:23.315855991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24hpg,Uid:4eee4086-954f-403a-8894-47bbf74e673c,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:23.348333 kubelet[2511]: I0806 07:51:23.348284 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2614905-9a91-4d95-add5-b63d8bc90613-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-rgr4f\" (UID: \"c2614905-9a91-4d95-add5-b63d8bc90613\") " pod="kube-system/cilium-operator-6bc8ccdb58-rgr4f" Aug 6 07:51:23.348333 kubelet[2511]: I0806 07:51:23.348346 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4r4\" (UniqueName: \"kubernetes.io/projected/c2614905-9a91-4d95-add5-b63d8bc90613-kube-api-access-6m4r4\") pod \"cilium-operator-6bc8ccdb58-rgr4f\" (UID: \"c2614905-9a91-4d95-add5-b63d8bc90613\") " pod="kube-system/cilium-operator-6bc8ccdb58-rgr4f" Aug 6 07:51:23.429640 containerd[1471]: time="2024-08-06T07:51:23.429044904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:23.429640 containerd[1471]: time="2024-08-06T07:51:23.429138225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.429640 containerd[1471]: time="2024-08-06T07:51:23.429170286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:23.429640 containerd[1471]: time="2024-08-06T07:51:23.429191456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.471686 containerd[1471]: time="2024-08-06T07:51:23.471428586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:23.473867 containerd[1471]: time="2024-08-06T07:51:23.471514673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.473867 containerd[1471]: time="2024-08-06T07:51:23.471895117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:23.473867 containerd[1471]: time="2024-08-06T07:51:23.471920457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.502261 systemd[1]: Started cri-containerd-ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f.scope - libcontainer container ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f. Aug 6 07:51:23.526677 systemd[1]: Started cri-containerd-52f254fd6b25c03f8062323b5b1b39e86164f21bbe03c1064ae10d7f5dfa0563.scope - libcontainer container 52f254fd6b25c03f8062323b5b1b39e86164f21bbe03c1064ae10d7f5dfa0563. Aug 6 07:51:23.565569 kubelet[2511]: E0806 07:51:23.565526 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.566489 containerd[1471]: time="2024-08-06T07:51:23.566409842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rgr4f,Uid:c2614905-9a91-4d95-add5-b63d8bc90613,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:23.578504 containerd[1471]: time="2024-08-06T07:51:23.578022099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24hpg,Uid:4eee4086-954f-403a-8894-47bbf74e673c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\"" Aug 6 07:51:23.580821 kubelet[2511]: E0806 07:51:23.580720 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.586282 containerd[1471]: time="2024-08-06T07:51:23.586129972Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 6 07:51:23.589832 containerd[1471]: time="2024-08-06T07:51:23.589780642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8thnv,Uid:5b4ce886-55fd-4ff9-8771-b54c7231bdc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"52f254fd6b25c03f8062323b5b1b39e86164f21bbe03c1064ae10d7f5dfa0563\"" Aug 6 07:51:23.593416 kubelet[2511]: E0806 07:51:23.593113 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.602229 containerd[1471]: time="2024-08-06T07:51:23.602141160Z" level=info msg="CreateContainer within sandbox \"52f254fd6b25c03f8062323b5b1b39e86164f21bbe03c1064ae10d7f5dfa0563\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 6 07:51:23.647570 containerd[1471]: time="2024-08-06T07:51:23.647415260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:23.647570 containerd[1471]: time="2024-08-06T07:51:23.647509681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.647951 containerd[1471]: time="2024-08-06T07:51:23.647553465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:23.647951 containerd[1471]: time="2024-08-06T07:51:23.647583102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:23.657878 containerd[1471]: time="2024-08-06T07:51:23.657696160Z" level=info msg="CreateContainer within sandbox \"52f254fd6b25c03f8062323b5b1b39e86164f21bbe03c1064ae10d7f5dfa0563\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84d078b05f3310fc7c2bb3adcd4a18b1daf299d989a50c30a1137563b8dafdcf\"" Aug 6 07:51:23.660045 containerd[1471]: time="2024-08-06T07:51:23.659901710Z" level=info msg="StartContainer for \"84d078b05f3310fc7c2bb3adcd4a18b1daf299d989a50c30a1137563b8dafdcf\"" Aug 6 07:51:23.675917 systemd[1]: Started cri-containerd-d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e.scope - libcontainer container d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e. Aug 6 07:51:23.727953 systemd[1]: Started cri-containerd-84d078b05f3310fc7c2bb3adcd4a18b1daf299d989a50c30a1137563b8dafdcf.scope - libcontainer container 84d078b05f3310fc7c2bb3adcd4a18b1daf299d989a50c30a1137563b8dafdcf. Aug 6 07:51:23.790533 containerd[1471]: time="2024-08-06T07:51:23.789279917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rgr4f,Uid:c2614905-9a91-4d95-add5-b63d8bc90613,Namespace:kube-system,Attempt:0,} returns sandbox id \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\"" Aug 6 07:51:23.795823 kubelet[2511]: E0806 07:51:23.793744 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:23.815635 containerd[1471]: time="2024-08-06T07:51:23.813150435Z" level=info msg="StartContainer for \"84d078b05f3310fc7c2bb3adcd4a18b1daf299d989a50c30a1137563b8dafdcf\" returns successfully" Aug 6 07:51:24.108138 kubelet[2511]: E0806 07:51:24.107585 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:24.126625 kubelet[2511]: I0806 07:51:24.124621 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8thnv" podStartSLOduration=2.124549371 podCreationTimestamp="2024-08-06 07:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:24.124137025 +0000 UTC m=+14.362232875" watchObservedRunningTime="2024-08-06 07:51:24.124549371 +0000 UTC m=+14.362645203" Aug 6 07:51:31.620764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195913274.mount: Deactivated successfully. 
Aug 6 07:51:34.826974 containerd[1471]: time="2024-08-06T07:51:34.824931970Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.238639502s" Aug 6 07:51:34.826974 containerd[1471]: time="2024-08-06T07:51:34.825015704Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 6 07:51:34.826974 containerd[1471]: time="2024-08-06T07:51:34.775580213Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735307" Aug 6 07:51:34.854711 containerd[1471]: time="2024-08-06T07:51:34.853655327Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:51:34.856797 containerd[1471]: time="2024-08-06T07:51:34.856738846Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:51:34.860386 containerd[1471]: time="2024-08-06T07:51:34.860180286Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 6 07:51:34.872836 containerd[1471]: time="2024-08-06T07:51:34.872249181Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 6 07:51:34.946625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133158155.mount: Deactivated successfully. Aug 6 07:51:34.958540 containerd[1471]: time="2024-08-06T07:51:34.958415117Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\"" Aug 6 07:51:34.960038 containerd[1471]: time="2024-08-06T07:51:34.959166507Z" level=info msg="StartContainer for \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\"" Aug 6 07:51:35.104001 systemd[1]: Started cri-containerd-6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4.scope - libcontainer container 6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4. Aug 6 07:51:35.167523 containerd[1471]: time="2024-08-06T07:51:35.167438356Z" level=info msg="StartContainer for \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\" returns successfully" Aug 6 07:51:35.182920 systemd[1]: cri-containerd-6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4.scope: Deactivated successfully. 
Aug 6 07:51:35.334371 containerd[1471]: time="2024-08-06T07:51:35.302070977Z" level=info msg="shim disconnected" id=6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4 namespace=k8s.io Aug 6 07:51:35.334371 containerd[1471]: time="2024-08-06T07:51:35.334369902Z" level=warning msg="cleaning up after shim disconnected" id=6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4 namespace=k8s.io Aug 6 07:51:35.334371 containerd[1471]: time="2024-08-06T07:51:35.334398797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:51:35.357784 containerd[1471]: time="2024-08-06T07:51:35.357408757Z" level=warning msg="cleanup warnings time=\"2024-08-06T07:51:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 6 07:51:35.938814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4-rootfs.mount: Deactivated successfully. Aug 6 07:51:36.155670 kubelet[2511]: E0806 07:51:36.154057 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:36.173624 containerd[1471]: time="2024-08-06T07:51:36.172647413Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 6 07:51:36.234194 containerd[1471]: time="2024-08-06T07:51:36.231108729Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\"" Aug 6 07:51:36.234194 containerd[1471]: time="2024-08-06T07:51:36.232368234Z" level=info msg="StartContainer for \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\"" Aug 6 07:51:36.301075 systemd[1]: Started cri-containerd-7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48.scope - libcontainer container 7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48. Aug 6 07:51:36.378456 containerd[1471]: time="2024-08-06T07:51:36.378361117Z" level=info msg="StartContainer for \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\" returns successfully" Aug 6 07:51:36.397062 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 6 07:51:36.397644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 6 07:51:36.397751 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:51:36.406443 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 07:51:36.406852 systemd[1]: cri-containerd-7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48.scope: Deactivated successfully. Aug 6 07:51:36.471265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 6 07:51:36.515272 containerd[1471]: time="2024-08-06T07:51:36.514985577Z" level=info msg="shim disconnected" id=7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48 namespace=k8s.io Aug 6 07:51:36.515272 containerd[1471]: time="2024-08-06T07:51:36.515137981Z" level=warning msg="cleaning up after shim disconnected" id=7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48 namespace=k8s.io Aug 6 07:51:36.515272 containerd[1471]: time="2024-08-06T07:51:36.515155021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:51:36.938584 systemd[1]: run-containerd-runc-k8s.io-7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48-runc.75k5M3.mount: Deactivated successfully. Aug 6 07:51:36.938780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48-rootfs.mount: Deactivated successfully. Aug 6 07:51:37.164370 kubelet[2511]: E0806 07:51:37.162212 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:37.168486 containerd[1471]: time="2024-08-06T07:51:37.168425911Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 6 07:51:37.285944 containerd[1471]: time="2024-08-06T07:51:37.285525233Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\"" Aug 6 07:51:37.288619 containerd[1471]: time="2024-08-06T07:51:37.287946733Z" level=info msg="StartContainer for \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\"" Aug 6 07:51:37.305513 containerd[1471]: time="2024-08-06T07:51:37.305438930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:51:37.310326 containerd[1471]: time="2024-08-06T07:51:37.310205942Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Aug 6 07:51:37.313521 containerd[1471]: time="2024-08-06T07:51:37.313464698Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 07:51:37.325187 containerd[1471]: time="2024-08-06T07:51:37.324504839Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.452168984s" Aug 6 07:51:37.325803 containerd[1471]: time="2024-08-06T07:51:37.325413221Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 6 
07:51:37.330040 containerd[1471]: time="2024-08-06T07:51:37.329855386Z" level=info msg="CreateContainer within sandbox \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 6 07:51:37.371548 containerd[1471]: time="2024-08-06T07:51:37.370825321Z" level=info msg="CreateContainer within sandbox \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\"" Aug 6 07:51:37.371136 systemd[1]: Started cri-containerd-64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812.scope - libcontainer container 64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812. Aug 6 07:51:37.376472 containerd[1471]: time="2024-08-06T07:51:37.373742478Z" level=info msg="StartContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\"" Aug 6 07:51:37.437943 systemd[1]: Started cri-containerd-776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d.scope - libcontainer container 776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d. Aug 6 07:51:37.447354 containerd[1471]: time="2024-08-06T07:51:37.446866067Z" level=info msg="StartContainer for \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\" returns successfully" Aug 6 07:51:37.450196 systemd[1]: cri-containerd-64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812.scope: Deactivated successfully. Aug 6 07:51:37.549421 containerd[1471]: time="2024-08-06T07:51:37.547795769Z" level=info msg="StartContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" returns successfully" Aug 6 07:51:37.552451 containerd[1471]: time="2024-08-06T07:51:37.552354054Z" level=info msg="shim disconnected" id=64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812 namespace=k8s.io Aug 6 07:51:37.552917 containerd[1471]: time="2024-08-06T07:51:37.552701837Z" level=warning msg="cleaning up after shim disconnected" id=64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812 namespace=k8s.io Aug 6 07:51:37.552917 containerd[1471]: time="2024-08-06T07:51:37.552727255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:51:37.941038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812-rootfs.mount: Deactivated successfully. 
Aug 6 07:51:38.169649 kubelet[2511]: E0806 07:51:38.169608 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:38.177625 kubelet[2511]: E0806 07:51:38.176763 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:38.183412 containerd[1471]: time="2024-08-06T07:51:38.183120259Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 6 07:51:38.232309 kubelet[2511]: I0806 07:51:38.232150 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-rgr4f" podStartSLOduration=1.698899671 podCreationTimestamp="2024-08-06 07:51:23 +0000 UTC" firstStartedPulling="2024-08-06 07:51:23.796182419 +0000 UTC m=+14.034278263" lastFinishedPulling="2024-08-06 07:51:37.327239529 +0000 UTC m=+27.565335374" observedRunningTime="2024-08-06 07:51:38.229530933 +0000 UTC m=+28.467626786" watchObservedRunningTime="2024-08-06 07:51:38.229956782 +0000 UTC m=+28.468052629" Aug 6 07:51:38.238452 containerd[1471]: time="2024-08-06T07:51:38.238383845Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\"" Aug 6 07:51:38.240405 containerd[1471]: time="2024-08-06T07:51:38.240346456Z" level=info msg="StartContainer for \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\"" Aug 6 07:51:38.335008 systemd[1]: Started cri-containerd-6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b.scope - libcontainer container 6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b. Aug 6 07:51:38.468936 systemd[1]: cri-containerd-6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b.scope: Deactivated successfully. Aug 6 07:51:38.474444 containerd[1471]: time="2024-08-06T07:51:38.474253845Z" level=info msg="StartContainer for \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\" returns successfully" Aug 6 07:51:38.535175 containerd[1471]: time="2024-08-06T07:51:38.534772662Z" level=info msg="shim disconnected" id=6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b namespace=k8s.io Aug 6 07:51:38.535175 containerd[1471]: time="2024-08-06T07:51:38.534931233Z" level=warning msg="cleaning up after shim disconnected" id=6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b namespace=k8s.io Aug 6 07:51:38.535175 containerd[1471]: time="2024-08-06T07:51:38.534946728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:51:38.936550 systemd[1]: run-containerd-runc-k8s.io-6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b-runc.yURgIb.mount: Deactivated successfully. Aug 6 07:51:38.936746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b-rootfs.mount: Deactivated successfully. 
Aug 6 07:51:39.185286 kubelet[2511]: E0806 07:51:39.185224 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:39.186634 kubelet[2511]: E0806 07:51:39.186175 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:39.194835 containerd[1471]: time="2024-08-06T07:51:39.193816343Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 6 07:51:39.254180 containerd[1471]: time="2024-08-06T07:51:39.254095549Z" level=info msg="CreateContainer within sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\"" Aug 6 07:51:39.255425 containerd[1471]: time="2024-08-06T07:51:39.255377648Z" level=info msg="StartContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\"" Aug 6 07:51:39.334935 systemd[1]: Started cri-containerd-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc.scope - libcontainer container 4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc. Aug 6 07:51:39.388499 containerd[1471]: time="2024-08-06T07:51:39.388195633Z" level=info msg="StartContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" returns successfully" Aug 6 07:51:39.587345 kubelet[2511]: I0806 07:51:39.587300 2511 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 6 07:51:39.641815 kubelet[2511]: I0806 07:51:39.641762 2511 topology_manager.go:215] "Topology Admit Handler" podUID="82159a53-c744-461d-a9f9-d8ad25945bc1" podNamespace="kube-system" podName="coredns-5dd5756b68-2fmfw" Aug 6 07:51:39.644669 kubelet[2511]: I0806 07:51:39.644528 2511 topology_manager.go:215] "Topology Admit Handler" podUID="0c99806e-e8c1-4a36-b7cb-aa3a8180b59c" podNamespace="kube-system" podName="coredns-5dd5756b68-rs5gl" Aug 6 07:51:39.661946 systemd[1]: Created slice kubepods-burstable-pod82159a53_c744_461d_a9f9_d8ad25945bc1.slice - libcontainer container kubepods-burstable-pod82159a53_c744_461d_a9f9_d8ad25945bc1.slice. Aug 6 07:51:39.675577 systemd[1]: Created slice kubepods-burstable-pod0c99806e_e8c1_4a36_b7cb_aa3a8180b59c.slice - libcontainer container kubepods-burstable-pod0c99806e_e8c1_4a36_b7cb_aa3a8180b59c.slice. 
Aug 6 07:51:39.681414 kubelet[2511]: I0806 07:51:39.681154 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c99806e-e8c1-4a36-b7cb-aa3a8180b59c-config-volume\") pod \"coredns-5dd5756b68-rs5gl\" (UID: \"0c99806e-e8c1-4a36-b7cb-aa3a8180b59c\") " pod="kube-system/coredns-5dd5756b68-rs5gl" Aug 6 07:51:39.681414 kubelet[2511]: I0806 07:51:39.681220 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82159a53-c744-461d-a9f9-d8ad25945bc1-config-volume\") pod \"coredns-5dd5756b68-2fmfw\" (UID: \"82159a53-c744-461d-a9f9-d8ad25945bc1\") " pod="kube-system/coredns-5dd5756b68-2fmfw" Aug 6 07:51:39.681414 kubelet[2511]: I0806 07:51:39.681252 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8xg\" (UniqueName: \"kubernetes.io/projected/0c99806e-e8c1-4a36-b7cb-aa3a8180b59c-kube-api-access-gb8xg\") pod \"coredns-5dd5756b68-rs5gl\" (UID: \"0c99806e-e8c1-4a36-b7cb-aa3a8180b59c\") " pod="kube-system/coredns-5dd5756b68-rs5gl" Aug 6 07:51:39.681414 kubelet[2511]: I0806 07:51:39.681291 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbj8c\" (UniqueName: \"kubernetes.io/projected/82159a53-c744-461d-a9f9-d8ad25945bc1-kube-api-access-qbj8c\") pod \"coredns-5dd5756b68-2fmfw\" (UID: \"82159a53-c744-461d-a9f9-d8ad25945bc1\") " pod="kube-system/coredns-5dd5756b68-2fmfw" Aug 6 07:51:39.942797 systemd[1]: run-containerd-runc-k8s.io-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc-runc.a5oW3p.mount: Deactivated successfully. 
Aug 6 07:51:39.970648 kubelet[2511]: E0806 07:51:39.969115 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:39.973400 containerd[1471]: time="2024-08-06T07:51:39.972841420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fmfw,Uid:82159a53-c744-461d-a9f9-d8ad25945bc1,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:39.982264 kubelet[2511]: E0806 07:51:39.982213 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:39.983310 containerd[1471]: time="2024-08-06T07:51:39.983259417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rs5gl,Uid:0c99806e-e8c1-4a36-b7cb-aa3a8180b59c,Namespace:kube-system,Attempt:0,}" Aug 6 07:51:40.200904 kubelet[2511]: E0806 07:51:40.200679 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:40.229731 kubelet[2511]: I0806 07:51:40.229662 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-24hpg" podStartSLOduration=6.986227544 podCreationTimestamp="2024-08-06 07:51:22 +0000 UTC" firstStartedPulling="2024-08-06 07:51:23.582730358 +0000 UTC m=+13.820826202" lastFinishedPulling="2024-08-06 07:51:34.826085562 +0000 UTC m=+25.064181396" observedRunningTime="2024-08-06 07:51:40.2251325 +0000 UTC m=+30.463228359" watchObservedRunningTime="2024-08-06 07:51:40.229582738 +0000 UTC m=+30.467678649" Aug 6 07:51:41.201459 kubelet[2511]: E0806 07:51:41.201420 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:42.084571 systemd-networkd[1377]: cilium_host: Link UP Aug 6 07:51:42.085992 systemd-networkd[1377]: cilium_net: Link UP Aug 6 07:51:42.088574 systemd-networkd[1377]: cilium_net: Gained carrier Aug 6 07:51:42.088981 systemd-networkd[1377]: cilium_host: Gained carrier Aug 6 07:51:42.089190 systemd-networkd[1377]: cilium_net: Gained IPv6LL Aug 6 07:51:42.089428 systemd-networkd[1377]: cilium_host: Gained IPv6LL Aug 6 07:51:42.203769 kubelet[2511]: E0806 07:51:42.203700 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:42.278797 systemd-networkd[1377]: cilium_vxlan: Link UP Aug 6 07:51:42.278809 systemd-networkd[1377]: cilium_vxlan: Gained carrier Aug 6 07:51:42.935640 kernel: NET: Registered PF_ALG protocol family Aug 6 07:51:43.899471 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Aug 6 07:51:44.092767 systemd-networkd[1377]: lxc_health: Link UP Aug 6 07:51:44.104970 systemd-networkd[1377]: lxc_health: Gained carrier Aug 6 07:51:44.627224 systemd-networkd[1377]: lxc6246d96747dd: Link UP Aug 6 07:51:44.637173 kernel: eth0: renamed from tmp36219 Aug 6 07:51:44.649570 systemd-networkd[1377]: lxc6246d96747dd: Gained carrier Aug 6 07:51:44.650094 systemd-networkd[1377]: lxc1921d0b51749: Link UP Aug 6 07:51:44.654181 kernel: eth0: renamed from tmp261c1 Aug 6 07:51:44.661822 systemd-networkd[1377]: lxc1921d0b51749: Gained carrier Aug 6 
07:51:45.319632 kubelet[2511]: E0806 07:51:45.318202 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:45.946494 systemd-networkd[1377]: lxc_health: Gained IPv6LL Aug 6 07:51:46.010465 systemd-networkd[1377]: lxc6246d96747dd: Gained IPv6LL Aug 6 07:51:46.217018 kubelet[2511]: E0806 07:51:46.216827 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:46.458649 systemd-networkd[1377]: lxc1921d0b51749: Gained IPv6LL Aug 6 07:51:47.219502 kubelet[2511]: E0806 07:51:47.219394 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:51.019449 containerd[1471]: time="2024-08-06T07:51:51.019056415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:51.020019 containerd[1471]: time="2024-08-06T07:51:51.019679066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:51.021448 containerd[1471]: time="2024-08-06T07:51:51.019856297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:51.021448 containerd[1471]: time="2024-08-06T07:51:51.019889446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:51.053715 containerd[1471]: time="2024-08-06T07:51:51.052907855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:51:51.053715 containerd[1471]: time="2024-08-06T07:51:51.053054940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:51.053715 containerd[1471]: time="2024-08-06T07:51:51.053106410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:51:51.053715 containerd[1471]: time="2024-08-06T07:51:51.053129352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:51:51.084784 systemd[1]: run-containerd-runc-k8s.io-261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4-runc.46v8A0.mount: Deactivated successfully. Aug 6 07:51:51.111869 systemd[1]: Started cri-containerd-261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4.scope - libcontainer container 261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4. Aug 6 07:51:51.135929 systemd[1]: Started cri-containerd-36219dd6f71526206d7e7e902e3dee935c6f8fa9862edeb7c8c95a05afbc79e0.scope - libcontainer container 36219dd6f71526206d7e7e902e3dee935c6f8fa9862edeb7c8c95a05afbc79e0. 
Aug 6 07:51:51.254203 containerd[1471]: time="2024-08-06T07:51:51.254137840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rs5gl,Uid:0c99806e-e8c1-4a36-b7cb-aa3a8180b59c,Namespace:kube-system,Attempt:0,} returns sandbox id \"261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4\"" Aug 6 07:51:51.261174 kubelet[2511]: E0806 07:51:51.259545 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:51.277254 containerd[1471]: time="2024-08-06T07:51:51.276737579Z" level=info msg="CreateContainer within sandbox \"261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:51:51.295653 containerd[1471]: time="2024-08-06T07:51:51.295550141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fmfw,Uid:82159a53-c744-461d-a9f9-d8ad25945bc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"36219dd6f71526206d7e7e902e3dee935c6f8fa9862edeb7c8c95a05afbc79e0\"" Aug 6 07:51:51.298047 kubelet[2511]: E0806 07:51:51.298002 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:51.306458 containerd[1471]: time="2024-08-06T07:51:51.306351673Z" level=info msg="CreateContainer within sandbox \"36219dd6f71526206d7e7e902e3dee935c6f8fa9862edeb7c8c95a05afbc79e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 07:51:51.344705 containerd[1471]: time="2024-08-06T07:51:51.344627881Z" level=info msg="CreateContainer within sandbox \"261c178c446ba441ec12eadc3d191bbc2f5631b0b7d88c26b0d73870ca14b2e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0c7c14a380f2e185705146b6bd6857b117ae5aef534789a371d89ce23a50fc3\"" Aug 6 07:51:51.346157 containerd[1471]: time="2024-08-06T07:51:51.345632648Z" level=info msg="StartContainer for \"c0c7c14a380f2e185705146b6bd6857b117ae5aef534789a371d89ce23a50fc3\"" Aug 6 07:51:51.366005 containerd[1471]: time="2024-08-06T07:51:51.365811420Z" level=info msg="CreateContainer within sandbox \"36219dd6f71526206d7e7e902e3dee935c6f8fa9862edeb7c8c95a05afbc79e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5f54ed86b425c0de5d4370d479fd4c7eea32b89ccc424d36619703d6dcdcb08\"" Aug 6 07:51:51.367505 containerd[1471]: time="2024-08-06T07:51:51.366917125Z" level=info msg="StartContainer for \"a5f54ed86b425c0de5d4370d479fd4c7eea32b89ccc424d36619703d6dcdcb08\"" Aug 6 07:51:51.396035 systemd[1]: Started cri-containerd-c0c7c14a380f2e185705146b6bd6857b117ae5aef534789a371d89ce23a50fc3.scope - libcontainer container c0c7c14a380f2e185705146b6bd6857b117ae5aef534789a371d89ce23a50fc3. Aug 6 07:51:51.420894 systemd[1]: Started cri-containerd-a5f54ed86b425c0de5d4370d479fd4c7eea32b89ccc424d36619703d6dcdcb08.scope - libcontainer container a5f54ed86b425c0de5d4370d479fd4c7eea32b89ccc424d36619703d6dcdcb08. 
Aug 6 07:51:51.460753 containerd[1471]: time="2024-08-06T07:51:51.460578005Z" level=info msg="StartContainer for \"c0c7c14a380f2e185705146b6bd6857b117ae5aef534789a371d89ce23a50fc3\" returns successfully" Aug 6 07:51:51.484374 containerd[1471]: time="2024-08-06T07:51:51.484310854Z" level=info msg="StartContainer for \"a5f54ed86b425c0de5d4370d479fd4c7eea32b89ccc424d36619703d6dcdcb08\" returns successfully" Aug 6 07:51:52.035416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123492093.mount: Deactivated successfully. Aug 6 07:51:52.242741 kubelet[2511]: E0806 07:51:52.241504 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:52.248252 kubelet[2511]: E0806 07:51:52.247617 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:52.266835 kubelet[2511]: I0806 07:51:52.266215 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rs5gl" podStartSLOduration=29.266158415 podCreationTimestamp="2024-08-06 07:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:52.265885758 +0000 UTC m=+42.503981610" watchObservedRunningTime="2024-08-06 07:51:52.266158415 +0000 UTC m=+42.504254266" Aug 6 07:51:53.250635 kubelet[2511]: E0806 07:51:53.250373 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:53.250635 kubelet[2511]: E0806 07:51:53.250375 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:54.253076 kubelet[2511]: E0806 07:51:54.252763 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:51:54.253076 kubelet[2511]: E0806 07:51:54.252911 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:52:23.606161 systemd[1]: Started sshd@7-64.23.226.177:22-139.178.89.65:47080.service - OpenSSH per-connection server daemon (139.178.89.65:47080). Aug 6 07:52:23.711910 sshd[3905]: Accepted publickey for core from 139.178.89.65 port 47080 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:23.717230 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:23.753739 systemd-logind[1450]: New session 8 of user core. Aug 6 07:52:23.761531 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 6 07:52:24.691694 sshd[3905]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:24.703075 systemd[1]: sshd@7-64.23.226.177:22-139.178.89.65:47080.service: Deactivated successfully. Aug 6 07:52:24.708850 systemd[1]: session-8.scope: Deactivated successfully. Aug 6 07:52:24.712100 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. 
Aug 6 07:52:24.714399 systemd-logind[1450]: Removed session 8. Aug 6 07:52:28.970737 kubelet[2511]: E0806 07:52:28.970460 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:52:29.714186 systemd[1]: Started sshd@8-64.23.226.177:22-139.178.89.65:47092.service - OpenSSH per-connection server daemon (139.178.89.65:47092). Aug 6 07:52:29.780455 sshd[3921]: Accepted publickey for core from 139.178.89.65 port 47092 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:29.783631 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:29.793502 systemd-logind[1450]: New session 9 of user core. Aug 6 07:52:29.801982 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 6 07:52:29.995526 sshd[3921]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:30.002143 systemd[1]: sshd@8-64.23.226.177:22-139.178.89.65:47092.service: Deactivated successfully. Aug 6 07:52:30.006967 systemd[1]: session-9.scope: Deactivated successfully. Aug 6 07:52:30.009043 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Aug 6 07:52:30.011773 systemd-logind[1450]: Removed session 9. Aug 6 07:52:34.971002 kubelet[2511]: E0806 07:52:34.970857 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:52:35.017269 systemd[1]: Started sshd@9-64.23.226.177:22-139.178.89.65:52150.service - OpenSSH per-connection server daemon (139.178.89.65:52150). Aug 6 07:52:35.090409 sshd[3936]: Accepted publickey for core from 139.178.89.65 port 52150 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:35.093327 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:35.103952 systemd-logind[1450]: New session 10 of user core. Aug 6 07:52:35.112013 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 6 07:52:35.283910 sshd[3936]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:35.291254 systemd[1]: sshd@9-64.23.226.177:22-139.178.89.65:52150.service: Deactivated successfully. Aug 6 07:52:35.295145 systemd[1]: session-10.scope: Deactivated successfully. Aug 6 07:52:35.297872 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Aug 6 07:52:35.300133 systemd-logind[1450]: Removed session 10. Aug 6 07:52:36.971140 kubelet[2511]: E0806 07:52:36.971088 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:52:40.305136 systemd[1]: Started sshd@10-64.23.226.177:22-139.178.89.65:52164.service - OpenSSH per-connection server daemon (139.178.89.65:52164). Aug 6 07:52:40.360933 sshd[3949]: Accepted publickey for core from 139.178.89.65 port 52164 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:40.363528 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:40.373300 systemd-logind[1450]: New session 11 of user core. Aug 6 07:52:40.382979 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 6 07:52:40.587328 sshd[3949]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:40.600578 systemd[1]: sshd@10-64.23.226.177:22-139.178.89.65:52164.service: Deactivated successfully. Aug 6 07:52:40.603816 systemd[1]: session-11.scope: Deactivated successfully. Aug 6 07:52:40.606242 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Aug 6 07:52:40.609818 systemd-logind[1450]: Removed session 11. Aug 6 07:52:40.619229 systemd[1]: Started sshd@11-64.23.226.177:22-139.178.89.65:42084.service - OpenSSH per-connection server daemon (139.178.89.65:42084). Aug 6 07:52:40.698893 sshd[3963]: Accepted publickey for core from 139.178.89.65 port 42084 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:40.701756 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:40.711707 systemd-logind[1450]: New session 12 of user core. Aug 6 07:52:40.718007 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 6 07:52:42.004716 sshd[3963]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:42.022568 systemd[1]: sshd@11-64.23.226.177:22-139.178.89.65:42084.service: Deactivated successfully. Aug 6 07:52:42.029951 systemd[1]: session-12.scope: Deactivated successfully. Aug 6 07:52:42.035751 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Aug 6 07:52:42.042203 systemd[1]: Started sshd@12-64.23.226.177:22-139.178.89.65:42088.service - OpenSSH per-connection server daemon (139.178.89.65:42088). Aug 6 07:52:42.045702 systemd-logind[1450]: Removed session 12. Aug 6 07:52:42.139496 sshd[3974]: Accepted publickey for core from 139.178.89.65 port 42088 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:42.142650 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:42.150766 systemd-logind[1450]: New session 13 of user core. Aug 6 07:52:42.161969 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 6 07:52:42.374752 sshd[3974]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:42.380175 systemd[1]: sshd@12-64.23.226.177:22-139.178.89.65:42088.service: Deactivated successfully. Aug 6 07:52:42.384222 systemd[1]: session-13.scope: Deactivated successfully. Aug 6 07:52:42.387476 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Aug 6 07:52:42.389761 systemd-logind[1450]: Removed session 13. Aug 6 07:52:47.395224 systemd[1]: Started sshd@13-64.23.226.177:22-139.178.89.65:42104.service - OpenSSH per-connection server daemon (139.178.89.65:42104). Aug 6 07:52:47.453973 sshd[3987]: Accepted publickey for core from 139.178.89.65 port 42104 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:47.456633 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:47.464732 systemd-logind[1450]: New session 14 of user core. Aug 6 07:52:47.470876 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 6 07:52:47.654648 sshd[3987]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:47.661019 systemd[1]: sshd@13-64.23.226.177:22-139.178.89.65:42104.service: Deactivated successfully. Aug 6 07:52:47.665815 systemd[1]: session-14.scope: Deactivated successfully. Aug 6 07:52:47.667790 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Aug 6 07:52:47.670301 systemd-logind[1450]: Removed session 14. 
Aug 6 07:52:52.677204 systemd[1]: Started sshd@14-64.23.226.177:22-139.178.89.65:40188.service - OpenSSH per-connection server daemon (139.178.89.65:40188). Aug 6 07:52:52.743453 sshd[4001]: Accepted publickey for core from 139.178.89.65 port 40188 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:52.746295 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:52.755068 systemd-logind[1450]: New session 15 of user core. Aug 6 07:52:52.761991 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 6 07:52:52.944730 sshd[4001]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:52.959705 systemd[1]: sshd@14-64.23.226.177:22-139.178.89.65:40188.service: Deactivated successfully. Aug 6 07:52:52.965121 systemd[1]: session-15.scope: Deactivated successfully. Aug 6 07:52:52.969471 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Aug 6 07:52:52.979293 systemd[1]: Started sshd@15-64.23.226.177:22-139.178.89.65:40200.service - OpenSSH per-connection server daemon (139.178.89.65:40200). Aug 6 07:52:52.982889 systemd-logind[1450]: Removed session 15. Aug 6 07:52:53.054687 sshd[4014]: Accepted publickey for core from 139.178.89.65 port 40200 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:53.057072 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:53.065906 systemd-logind[1450]: New session 16 of user core. Aug 6 07:52:53.080000 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 6 07:52:53.832083 sshd[4014]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:53.845964 systemd[1]: sshd@15-64.23.226.177:22-139.178.89.65:40200.service: Deactivated successfully. Aug 6 07:52:53.849514 systemd[1]: session-16.scope: Deactivated successfully. Aug 6 07:52:53.853221 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Aug 6 07:52:53.857318 systemd-logind[1450]: Removed session 16. Aug 6 07:52:53.863158 systemd[1]: Started sshd@16-64.23.226.177:22-139.178.89.65:40210.service - OpenSSH per-connection server daemon (139.178.89.65:40210). Aug 6 07:52:53.950676 sshd[4025]: Accepted publickey for core from 139.178.89.65 port 40210 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:53.952852 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:53.961376 systemd-logind[1450]: New session 17 of user core. Aug 6 07:52:53.971207 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 6 07:52:54.970528 kubelet[2511]: E0806 07:52:54.970144 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:52:55.159325 sshd[4025]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:55.170100 systemd[1]: sshd@16-64.23.226.177:22-139.178.89.65:40210.service: Deactivated successfully. Aug 6 07:52:55.176273 systemd[1]: session-17.scope: Deactivated successfully. Aug 6 07:52:55.178991 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Aug 6 07:52:55.187304 systemd[1]: Started sshd@17-64.23.226.177:22-139.178.89.65:40226.service - OpenSSH per-connection server daemon (139.178.89.65:40226). Aug 6 07:52:55.199156 systemd-logind[1450]: Removed session 17. 
Aug 6 07:52:55.273702 sshd[4048]: Accepted publickey for core from 139.178.89.65 port 40226 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:55.276095 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:55.284384 systemd-logind[1450]: New session 18 of user core. Aug 6 07:52:55.288929 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 6 07:52:55.761959 sshd[4048]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:55.773280 systemd[1]: sshd@17-64.23.226.177:22-139.178.89.65:40226.service: Deactivated successfully. Aug 6 07:52:55.777517 systemd[1]: session-18.scope: Deactivated successfully. Aug 6 07:52:55.784052 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Aug 6 07:52:55.793186 systemd[1]: Started sshd@18-64.23.226.177:22-139.178.89.65:40234.service - OpenSSH per-connection server daemon (139.178.89.65:40234). Aug 6 07:52:55.799034 systemd-logind[1450]: Removed session 18. Aug 6 07:52:55.847832 sshd[4059]: Accepted publickey for core from 139.178.89.65 port 40234 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:52:55.850460 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:52:55.858553 systemd-logind[1450]: New session 19 of user core. Aug 6 07:52:55.864965 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 6 07:52:56.015727 sshd[4059]: pam_unix(sshd:session): session closed for user core Aug 6 07:52:56.023358 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Aug 6 07:52:56.025738 systemd[1]: sshd@18-64.23.226.177:22-139.178.89.65:40234.service: Deactivated successfully. Aug 6 07:52:56.030240 systemd[1]: session-19.scope: Deactivated successfully. Aug 6 07:52:56.033932 systemd-logind[1450]: Removed session 19. Aug 6 07:52:57.971416 kubelet[2511]: E0806 07:52:57.971365 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:01.045172 systemd[1]: Started sshd@19-64.23.226.177:22-139.178.89.65:46690.service - OpenSSH per-connection server daemon (139.178.89.65:46690). Aug 6 07:53:01.134625 sshd[4071]: Accepted publickey for core from 139.178.89.65 port 46690 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:01.141336 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:01.152848 systemd-logind[1450]: New session 20 of user core. Aug 6 07:53:01.158918 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 6 07:53:01.435885 sshd[4071]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:01.441788 systemd[1]: sshd@19-64.23.226.177:22-139.178.89.65:46690.service: Deactivated successfully. Aug 6 07:53:01.447396 systemd[1]: session-20.scope: Deactivated successfully. Aug 6 07:53:01.451755 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Aug 6 07:53:01.454873 systemd-logind[1450]: Removed session 20. 
Aug 6 07:53:01.970412 kubelet[2511]: E0806 07:53:01.970327 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:05.971341 kubelet[2511]: E0806 07:53:05.971033 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:06.451508 systemd[1]: Started sshd@20-64.23.226.177:22-139.178.89.65:46704.service - OpenSSH per-connection server daemon (139.178.89.65:46704). Aug 6 07:53:06.515048 sshd[4087]: Accepted publickey for core from 139.178.89.65 port 46704 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:06.518364 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:06.527547 systemd-logind[1450]: New session 21 of user core. Aug 6 07:53:06.535990 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 6 07:53:06.695341 sshd[4087]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:06.703303 systemd[1]: sshd@20-64.23.226.177:22-139.178.89.65:46704.service: Deactivated successfully. Aug 6 07:53:06.707378 systemd[1]: session-21.scope: Deactivated successfully. Aug 6 07:53:06.710034 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Aug 6 07:53:06.712830 systemd-logind[1450]: Removed session 21. Aug 6 07:53:11.726042 systemd[1]: Started sshd@21-64.23.226.177:22-139.178.89.65:58514.service - OpenSSH per-connection server daemon (139.178.89.65:58514). Aug 6 07:53:11.783945 sshd[4102]: Accepted publickey for core from 139.178.89.65 port 58514 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:11.785027 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:11.793799 systemd-logind[1450]: New session 22 of user core. Aug 6 07:53:11.797915 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 6 07:53:11.963925 sshd[4102]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:11.970862 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Aug 6 07:53:11.972042 systemd[1]: sshd@21-64.23.226.177:22-139.178.89.65:58514.service: Deactivated successfully. Aug 6 07:53:11.978457 systemd[1]: session-22.scope: Deactivated successfully. Aug 6 07:53:11.982559 systemd-logind[1450]: Removed session 22. Aug 6 07:53:16.970464 kubelet[2511]: E0806 07:53:16.970339 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:16.985299 systemd[1]: Started sshd@22-64.23.226.177:22-139.178.89.65:58518.service - OpenSSH per-connection server daemon (139.178.89.65:58518). Aug 6 07:53:17.046898 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 58518 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:17.049513 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:17.057780 systemd-logind[1450]: New session 23 of user core. Aug 6 07:53:17.067932 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 6 07:53:17.225549 sshd[4115]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:17.232885 systemd[1]: sshd@22-64.23.226.177:22-139.178.89.65:58518.service: Deactivated successfully. 
Aug 6 07:53:17.235966 systemd[1]: session-23.scope: Deactivated successfully. Aug 6 07:53:17.237560 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Aug 6 07:53:17.239796 systemd-logind[1450]: Removed session 23. Aug 6 07:53:22.247161 systemd[1]: Started sshd@23-64.23.226.177:22-139.178.89.65:41970.service - OpenSSH per-connection server daemon (139.178.89.65:41970). Aug 6 07:53:22.314752 sshd[4127]: Accepted publickey for core from 139.178.89.65 port 41970 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:22.317888 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:22.325970 systemd-logind[1450]: New session 24 of user core. Aug 6 07:53:22.332936 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 6 07:53:22.500223 sshd[4127]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:22.509090 systemd[1]: sshd@23-64.23.226.177:22-139.178.89.65:41970.service: Deactivated successfully. Aug 6 07:53:22.514138 systemd[1]: session-24.scope: Deactivated successfully. Aug 6 07:53:22.516359 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Aug 6 07:53:22.517809 systemd-logind[1450]: Removed session 24. Aug 6 07:53:27.521193 systemd[1]: Started sshd@24-64.23.226.177:22-139.178.89.65:41974.service - OpenSSH per-connection server daemon (139.178.89.65:41974). Aug 6 07:53:27.582349 sshd[4142]: Accepted publickey for core from 139.178.89.65 port 41974 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:27.583303 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:27.593430 systemd-logind[1450]: New session 25 of user core. Aug 6 07:53:27.597423 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 6 07:53:27.747676 sshd[4142]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:27.759347 systemd[1]: sshd@24-64.23.226.177:22-139.178.89.65:41974.service: Deactivated successfully. Aug 6 07:53:27.762410 systemd[1]: session-25.scope: Deactivated successfully. Aug 6 07:53:27.764699 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. Aug 6 07:53:27.772256 systemd[1]: Started sshd@25-64.23.226.177:22-139.178.89.65:41990.service - OpenSSH per-connection server daemon (139.178.89.65:41990). Aug 6 07:53:27.778339 systemd-logind[1450]: Removed session 25. Aug 6 07:53:27.838555 sshd[4156]: Accepted publickey for core from 139.178.89.65 port 41990 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:27.841023 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:27.851762 systemd-logind[1450]: New session 26 of user core. Aug 6 07:53:27.855961 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 6 07:53:29.373081 kubelet[2511]: I0806 07:53:29.373027 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2fmfw" podStartSLOduration=126.372973821 podCreationTimestamp="2024-08-06 07:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:51:52.315626983 +0000 UTC m=+42.553722827" watchObservedRunningTime="2024-08-06 07:53:29.372973821 +0000 UTC m=+139.611069672" Aug 6 07:53:29.412745 systemd[1]: run-containerd-runc-k8s.io-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc-runc.d7r6U2.mount: Deactivated successfully. 
Aug 6 07:53:29.454980 containerd[1471]: time="2024-08-06T07:53:29.454866529Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 6 07:53:29.480243 containerd[1471]: time="2024-08-06T07:53:29.479801456Z" level=info msg="StopContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" with timeout 2 (s)" Aug 6 07:53:29.480243 containerd[1471]: time="2024-08-06T07:53:29.479994436Z" level=info msg="StopContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" with timeout 30 (s)" Aug 6 07:53:29.480822 containerd[1471]: time="2024-08-06T07:53:29.480656954Z" level=info msg="Stop container \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" with signal terminated" Aug 6 07:53:29.480822 containerd[1471]: time="2024-08-06T07:53:29.480726524Z" level=info msg="Stop container \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" with signal terminated" Aug 6 07:53:29.500065 systemd-networkd[1377]: lxc_health: Link DOWN Aug 6 07:53:29.500078 systemd-networkd[1377]: lxc_health: Lost carrier Aug 6 07:53:29.517169 systemd[1]: cri-containerd-776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d.scope: Deactivated successfully. Aug 6 07:53:29.552134 systemd[1]: cri-containerd-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc.scope: Deactivated successfully. Aug 6 07:53:29.552705 systemd[1]: cri-containerd-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc.scope: Consumed 10.911s CPU time. Aug 6 07:53:29.584416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d-rootfs.mount: Deactivated successfully. Aug 6 07:53:29.608727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc-rootfs.mount: Deactivated successfully. 
Aug 6 07:53:29.610039 containerd[1471]: time="2024-08-06T07:53:29.609946322Z" level=info msg="shim disconnected" id=776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d namespace=k8s.io Aug 6 07:53:29.610039 containerd[1471]: time="2024-08-06T07:53:29.610030234Z" level=warning msg="cleaning up after shim disconnected" id=776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d namespace=k8s.io Aug 6 07:53:29.610791 containerd[1471]: time="2024-08-06T07:53:29.610047752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:29.613472 containerd[1471]: time="2024-08-06T07:53:29.613137071Z" level=info msg="shim disconnected" id=4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc namespace=k8s.io Aug 6 07:53:29.613472 containerd[1471]: time="2024-08-06T07:53:29.613230792Z" level=warning msg="cleaning up after shim disconnected" id=4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc namespace=k8s.io Aug 6 07:53:29.613472 containerd[1471]: time="2024-08-06T07:53:29.613245486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:29.639948 containerd[1471]: time="2024-08-06T07:53:29.639730042Z" level=warning msg="cleanup warnings time=\"2024-08-06T07:53:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 6 07:53:29.661105 containerd[1471]: time="2024-08-06T07:53:29.659907287Z" level=info msg="StopContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" returns successfully" Aug 6 07:53:29.662353 containerd[1471]: time="2024-08-06T07:53:29.662302453Z" level=info msg="StopPodSandbox for \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\"" Aug 6 07:53:29.664223 containerd[1471]: time="2024-08-06T07:53:29.664077259Z" level=info msg="Container to stop \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.670512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e-shm.mount: Deactivated successfully. 
Aug 6 07:53:29.674918 containerd[1471]: time="2024-08-06T07:53:29.674302358Z" level=info msg="StopContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" returns successfully" Aug 6 07:53:29.675510 containerd[1471]: time="2024-08-06T07:53:29.675477977Z" level=info msg="StopPodSandbox for \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\"" Aug 6 07:53:29.675906 containerd[1471]: time="2024-08-06T07:53:29.675827800Z" level=info msg="Container to stop \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.677296 containerd[1471]: time="2024-08-06T07:53:29.677082819Z" level=info msg="Container to stop \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.677296 containerd[1471]: time="2024-08-06T07:53:29.677117616Z" level=info msg="Container to stop \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.677296 containerd[1471]: time="2024-08-06T07:53:29.677128923Z" level=info msg="Container to stop \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.677296 containerd[1471]: time="2024-08-06T07:53:29.677142705Z" level=info msg="Container to stop \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 6 07:53:29.689217 systemd[1]: cri-containerd-d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e.scope: Deactivated successfully. Aug 6 07:53:29.691964 systemd[1]: cri-containerd-ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f.scope: Deactivated successfully. 
Aug 6 07:53:29.751655 containerd[1471]: time="2024-08-06T07:53:29.751327762Z" level=info msg="shim disconnected" id=d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e namespace=k8s.io Aug 6 07:53:29.751655 containerd[1471]: time="2024-08-06T07:53:29.751401088Z" level=warning msg="cleaning up after shim disconnected" id=d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e namespace=k8s.io Aug 6 07:53:29.751655 containerd[1471]: time="2024-08-06T07:53:29.751413199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:29.751655 containerd[1471]: time="2024-08-06T07:53:29.751647704Z" level=info msg="shim disconnected" id=ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f namespace=k8s.io Aug 6 07:53:29.751655 containerd[1471]: time="2024-08-06T07:53:29.751679698Z" level=warning msg="cleaning up after shim disconnected" id=ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f namespace=k8s.io Aug 6 07:53:29.752364 containerd[1471]: time="2024-08-06T07:53:29.751690035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:29.784649 containerd[1471]: time="2024-08-06T07:53:29.784554785Z" level=info msg="TearDown network for sandbox \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\" successfully" Aug 6 07:53:29.784649 containerd[1471]: time="2024-08-06T07:53:29.784627959Z" level=info msg="StopPodSandbox for \"d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e\" returns successfully" Aug 6 07:53:29.794769 containerd[1471]: time="2024-08-06T07:53:29.794716510Z" level=info msg="TearDown network for sandbox \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" successfully" Aug 6 07:53:29.794769 containerd[1471]: time="2024-08-06T07:53:29.794763784Z" level=info msg="StopPodSandbox for \"ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f\" returns successfully" Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965003 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-xtables-lock\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965080 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-hostproc\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965117 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-kernel\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965164 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eee4086-954f-403a-8894-47bbf74e673c-cilium-config-path\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965194 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-bpf-maps\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.966460 kubelet[2511]: I0806 07:53:29.965473 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cx8j\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-kube-api-access-9cx8j\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965508 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cni-path\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965538 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-etc-cni-netd\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965576 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-cgroup\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965652 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-net\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965684 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-run\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967019 kubelet[2511]: I0806 07:53:29.965716 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2614905-9a91-4d95-add5-b63d8bc90613-cilium-config-path\") pod \"c2614905-9a91-4d95-add5-b63d8bc90613\" (UID: \"c2614905-9a91-4d95-add5-b63d8bc90613\") " Aug 6 07:53:29.967424 kubelet[2511]: I0806 07:53:29.965750 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-hubble-tls\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967424 kubelet[2511]: I0806 07:53:29.965781 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eee4086-954f-403a-8894-47bbf74e673c-clustermesh-secrets\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.967424 kubelet[2511]: I0806 07:53:29.966578 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4r4\" (UniqueName: 
\"kubernetes.io/projected/c2614905-9a91-4d95-add5-b63d8bc90613-kube-api-access-6m4r4\") pod \"c2614905-9a91-4d95-add5-b63d8bc90613\" (UID: \"c2614905-9a91-4d95-add5-b63d8bc90613\") " Aug 6 07:53:29.967424 kubelet[2511]: I0806 07:53:29.966646 2511 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-lib-modules\") pod \"4eee4086-954f-403a-8894-47bbf74e673c\" (UID: \"4eee4086-954f-403a-8894-47bbf74e673c\") " Aug 6 07:53:29.974176 kubelet[2511]: I0806 07:53:29.974095 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-hostproc" (OuterVolumeSpecName: "hostproc") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979447 kubelet[2511]: I0806 07:53:29.976120 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979447 kubelet[2511]: I0806 07:53:29.976986 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979447 kubelet[2511]: I0806 07:53:29.977267 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979447 kubelet[2511]: I0806 07:53:29.977312 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979447 kubelet[2511]: I0806 07:53:29.978888 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.979936 kubelet[2511]: I0806 07:53:29.978931 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.982675 kubelet[2511]: I0806 07:53:29.982578 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eee4086-954f-403a-8894-47bbf74e673c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 6 07:53:29.983406 kubelet[2511]: I0806 07:53:29.983369 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.983974 kubelet[2511]: I0806 07:53:29.983931 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2614905-9a91-4d95-add5-b63d8bc90613-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2614905-9a91-4d95-add5-b63d8bc90613" (UID: "c2614905-9a91-4d95-add5-b63d8bc90613"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 6 07:53:29.995620 kubelet[2511]: I0806 07:53:29.995549 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cni-path" (OuterVolumeSpecName: "cni-path") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:29.996079 kubelet[2511]: I0806 07:53:29.996042 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 6 07:53:30.030077 kubelet[2511]: I0806 07:53:30.029999 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2614905-9a91-4d95-add5-b63d8bc90613-kube-api-access-6m4r4" (OuterVolumeSpecName: "kube-api-access-6m4r4") pod "c2614905-9a91-4d95-add5-b63d8bc90613" (UID: "c2614905-9a91-4d95-add5-b63d8bc90613"). InnerVolumeSpecName "kube-api-access-6m4r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:53:30.032872 kubelet[2511]: I0806 07:53:30.032807 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eee4086-954f-403a-8894-47bbf74e673c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 6 07:53:30.033107 kubelet[2511]: I0806 07:53:30.032817 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:53:30.033761 kubelet[2511]: I0806 07:53:30.033713 2511 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-kube-api-access-9cx8j" (OuterVolumeSpecName: "kube-api-access-9cx8j") pod "4eee4086-954f-403a-8894-47bbf74e673c" (UID: "4eee4086-954f-403a-8894-47bbf74e673c"). InnerVolumeSpecName "kube-api-access-9cx8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 6 07:53:30.067991 kubelet[2511]: I0806 07:53:30.067903 2511 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-etc-cni-netd\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.068350 kubelet[2511]: I0806 07:53:30.068312 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-cgroup\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.068570 kubelet[2511]: I0806 07:53:30.068547 2511 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-net\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.068791 kubelet[2511]: I0806 07:53:30.068770 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cilium-run\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.068981 kubelet[2511]: I0806 07:53:30.068910 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2614905-9a91-4d95-add5-b63d8bc90613-cilium-config-path\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.068981 kubelet[2511]: I0806 07:53:30.068937 2511 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-hubble-tls\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069198 kubelet[2511]: I0806 07:53:30.069087 2511 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eee4086-954f-403a-8894-47bbf74e673c-clustermesh-secrets\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069198 kubelet[2511]: I0806 07:53:30.069114 2511 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6m4r4\" (UniqueName: \"kubernetes.io/projected/c2614905-9a91-4d95-add5-b63d8bc90613-kube-api-access-6m4r4\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069504 kubelet[2511]: I0806 07:53:30.069361 2511 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-lib-modules\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069504 kubelet[2511]: I0806 07:53:30.069394 2511 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-xtables-lock\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069504 kubelet[2511]: I0806 07:53:30.069429 2511 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-host-proc-sys-kernel\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069504 kubelet[2511]: I0806 07:53:30.069452 2511 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eee4086-954f-403a-8894-47bbf74e673c-cilium-config-path\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069504 kubelet[2511]: I0806 07:53:30.069470 2511 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-bpf-maps\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069955 kubelet[2511]: I0806 07:53:30.069647 2511 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9cx8j\" (UniqueName: \"kubernetes.io/projected/4eee4086-954f-403a-8894-47bbf74e673c-kube-api-access-9cx8j\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069955 kubelet[2511]: I0806 07:53:30.069677 2511 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-cni-path\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.069955 kubelet[2511]: I0806 07:53:30.069697 2511 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eee4086-954f-403a-8894-47bbf74e673c-hostproc\") on node \"ci-4012.1.0-0-e9cfdb5e55\" DevicePath \"\"" Aug 6 07:53:30.194194 kubelet[2511]: E0806 07:53:30.194064 2511 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 6 07:53:30.405825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d856e870a412e946fff742511e6f2c7bce154f2950dc57c31df140c7519a514e-rootfs.mount: Deactivated successfully. Aug 6 07:53:30.406019 systemd[1]: var-lib-kubelet-pods-c2614905\x2d9a91\x2d4d95\x2dadd5\x2db63d8bc90613-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6m4r4.mount: Deactivated successfully. Aug 6 07:53:30.406153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f-rootfs.mount: Deactivated successfully. Aug 6 07:53:30.406270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce3719245f1f46c707f9530b0cabe5ac35ecbeb444397e4aa9a8fb9f43d3c16f-shm.mount: Deactivated successfully. Aug 6 07:53:30.406461 systemd[1]: var-lib-kubelet-pods-4eee4086\x2d954f\x2d403a\x2d8894\x2d47bbf74e673c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cx8j.mount: Deactivated successfully. Aug 6 07:53:30.406636 systemd[1]: var-lib-kubelet-pods-4eee4086\x2d954f\x2d403a\x2d8894\x2d47bbf74e673c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 6 07:53:30.406749 systemd[1]: var-lib-kubelet-pods-4eee4086\x2d954f\x2d403a\x2d8894\x2d47bbf74e673c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 6 07:53:30.551083 kubelet[2511]: I0806 07:53:30.549520 2511 scope.go:117] "RemoveContainer" containerID="776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d" Aug 6 07:53:30.552902 containerd[1471]: time="2024-08-06T07:53:30.552825272Z" level=info msg="RemoveContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\"" Aug 6 07:53:30.570019 systemd[1]: Removed slice kubepods-burstable-pod4eee4086_954f_403a_8894_47bbf74e673c.slice - libcontainer container kubepods-burstable-pod4eee4086_954f_403a_8894_47bbf74e673c.slice. Aug 6 07:53:30.570244 systemd[1]: kubepods-burstable-pod4eee4086_954f_403a_8894_47bbf74e673c.slice: Consumed 11.042s CPU time. Aug 6 07:53:30.580002 containerd[1471]: time="2024-08-06T07:53:30.578952087Z" level=info msg="RemoveContainer for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" returns successfully" Aug 6 07:53:30.580795 kubelet[2511]: I0806 07:53:30.580660 2511 scope.go:117] "RemoveContainer" containerID="776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d" Aug 6 07:53:30.581446 containerd[1471]: time="2024-08-06T07:53:30.581343027Z" level=error msg="ContainerStatus for \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\": not found" Aug 6 07:53:30.582724 kubelet[2511]: E0806 07:53:30.582070 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\": not found" containerID="776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d" Aug 6 07:53:30.583566 systemd[1]: Removed slice kubepods-besteffort-podc2614905_9a91_4d95_add5_b63d8bc90613.slice - libcontainer container kubepods-besteffort-podc2614905_9a91_4d95_add5_b63d8bc90613.slice. 
Aug 6 07:53:30.594878 kubelet[2511]: I0806 07:53:30.594399 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d"} err="failed to get container status \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"776915cf02945fbeb033ee56603ab9de499601810e24a879c98c94ee18848b2d\": not found" Aug 6 07:53:30.594878 kubelet[2511]: I0806 07:53:30.594482 2511 scope.go:117] "RemoveContainer" containerID="4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc" Aug 6 07:53:30.600641 containerd[1471]: time="2024-08-06T07:53:30.600556070Z" level=info msg="RemoveContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\"" Aug 6 07:53:30.615337 containerd[1471]: time="2024-08-06T07:53:30.615276264Z" level=info msg="RemoveContainer for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" returns successfully" Aug 6 07:53:30.620630 kubelet[2511]: I0806 07:53:30.620504 2511 scope.go:117] "RemoveContainer" containerID="6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b" Aug 6 07:53:30.621928 containerd[1471]: time="2024-08-06T07:53:30.621884188Z" level=info msg="RemoveContainer for \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\"" Aug 6 07:53:30.635678 containerd[1471]: time="2024-08-06T07:53:30.635550382Z" level=info msg="RemoveContainer for \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\" returns successfully" Aug 6 07:53:30.637482 kubelet[2511]: I0806 07:53:30.637421 2511 scope.go:117] "RemoveContainer" containerID="64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812" Aug 6 07:53:30.643431 containerd[1471]: time="2024-08-06T07:53:30.643227525Z" level=info msg="RemoveContainer for \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\"" Aug 6 07:53:30.654510 containerd[1471]: time="2024-08-06T07:53:30.654431381Z" level=info msg="RemoveContainer for \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\" returns successfully" Aug 6 07:53:30.655237 kubelet[2511]: I0806 07:53:30.654825 2511 scope.go:117] "RemoveContainer" containerID="7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48" Aug 6 07:53:30.660345 containerd[1471]: time="2024-08-06T07:53:30.659616406Z" level=info msg="RemoveContainer for \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\"" Aug 6 07:53:30.671808 containerd[1471]: time="2024-08-06T07:53:30.671729875Z" level=info msg="RemoveContainer for \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\" returns successfully" Aug 6 07:53:30.672460 kubelet[2511]: I0806 07:53:30.672415 2511 scope.go:117] "RemoveContainer" containerID="6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4" Aug 6 07:53:30.674100 containerd[1471]: time="2024-08-06T07:53:30.674058697Z" level=info msg="RemoveContainer for \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\"" Aug 6 07:53:30.684518 containerd[1471]: time="2024-08-06T07:53:30.684431952Z" level=info msg="RemoveContainer for \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\" returns successfully" Aug 6 07:53:30.685065 kubelet[2511]: I0806 07:53:30.685009 2511 scope.go:117] "RemoveContainer" containerID="4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc" Aug 6 07:53:30.685483 containerd[1471]: 
time="2024-08-06T07:53:30.685398230Z" level=error msg="ContainerStatus for \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\": not found" Aug 6 07:53:30.686024 kubelet[2511]: E0806 07:53:30.685740 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\": not found" containerID="4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc" Aug 6 07:53:30.686024 kubelet[2511]: I0806 07:53:30.685802 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc"} err="failed to get container status \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a326d7e5c490b428ea4f34bb631994018d5d4814e928520747c7ff379c041cc\": not found" Aug 6 07:53:30.686024 kubelet[2511]: I0806 07:53:30.685823 2511 scope.go:117] "RemoveContainer" containerID="6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b" Aug 6 07:53:30.686662 containerd[1471]: time="2024-08-06T07:53:30.686493475Z" level=error msg="ContainerStatus for \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\": not found" Aug 6 07:53:30.686863 kubelet[2511]: E0806 07:53:30.686834 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\": not found" containerID="6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b" Aug 6 07:53:30.686936 kubelet[2511]: I0806 07:53:30.686894 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b"} err="failed to get container status \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6182b51fd632df646ddc8487388c570a1345f110b86a9d7f0a50b7cae30cbd2b\": not found" Aug 6 07:53:30.686936 kubelet[2511]: I0806 07:53:30.686911 2511 scope.go:117] "RemoveContainer" containerID="64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812" Aug 6 07:53:30.687218 containerd[1471]: time="2024-08-06T07:53:30.687168083Z" level=error msg="ContainerStatus for \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\": not found" Aug 6 07:53:30.687347 kubelet[2511]: E0806 07:53:30.687328 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\": not found" containerID="64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812" Aug 6 07:53:30.687427 kubelet[2511]: 
I0806 07:53:30.687368 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812"} err="failed to get container status \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\": rpc error: code = NotFound desc = an error occurred when try to find container \"64d06cd38c4c9d48e65717be27608d4343845767879d9cce4699c6a588dff812\": not found" Aug 6 07:53:30.687427 kubelet[2511]: I0806 07:53:30.687384 2511 scope.go:117] "RemoveContainer" containerID="7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48" Aug 6 07:53:30.687611 containerd[1471]: time="2024-08-06T07:53:30.687554173Z" level=error msg="ContainerStatus for \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\": not found" Aug 6 07:53:30.687778 kubelet[2511]: E0806 07:53:30.687761 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\": not found" containerID="7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48" Aug 6 07:53:30.688022 kubelet[2511]: I0806 07:53:30.688005 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48"} err="failed to get container status \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ccc329ba746a2278dea63cde37a19a21c51f16117ae4a0f56ca03fc83751a48\": not found" Aug 6 07:53:30.688100 kubelet[2511]: I0806 07:53:30.688028 2511 scope.go:117] "RemoveContainer" containerID="6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4" Aug 6 07:53:30.688328 containerd[1471]: time="2024-08-06T07:53:30.688284771Z" level=error msg="ContainerStatus for \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\": not found" Aug 6 07:53:30.688489 kubelet[2511]: E0806 07:53:30.688474 2511 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\": not found" containerID="6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4" Aug 6 07:53:30.688572 kubelet[2511]: I0806 07:53:30.688507 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4"} err="failed to get container status \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6829854fa308d50aef08a11fb385a3306296049ef9a5cdd708a1cbeaf02a37f4\": not found" Aug 6 07:53:31.333748 sshd[4156]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:31.343903 systemd[1]: sshd@25-64.23.226.177:22-139.178.89.65:41990.service: Deactivated successfully. 
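The repeated NotFound results above are benign: the kubelet asks the runtime for the status of containers it has just removed, containerd answers with gRPC code NotFound, and the kubelet records the error and moves on. Below is a minimal sketch of that call pattern against the CRI runtime service; the socket path and container ID are illustrative placeholders, not values read from this host.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI socket; this is the conventional containerd path, assumed here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Ask for the status of a container that has already been removed.
	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "4a326d7e5c49", // placeholder ID, not a real container
	})
	if status.Code(err) == codes.NotFound {
		// Same condition the kubelet treats as "already gone" in the log above.
		fmt.Println("container already removed; nothing to do")
		return
	}
	if err != nil {
		panic(err)
	}
}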
Aug 6 07:53:31.348506 systemd[1]: session-26.scope: Deactivated successfully. Aug 6 07:53:31.351875 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. Aug 6 07:53:31.369391 systemd[1]: Started sshd@26-64.23.226.177:22-139.178.89.65:36576.service - OpenSSH per-connection server daemon (139.178.89.65:36576). Aug 6 07:53:31.373458 systemd-logind[1450]: Removed session 26. Aug 6 07:53:31.448538 sshd[4313]: Accepted publickey for core from 139.178.89.65 port 36576 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:31.452339 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:31.462285 systemd-logind[1450]: New session 27 of user core. Aug 6 07:53:31.471022 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 6 07:53:31.975668 kubelet[2511]: I0806 07:53:31.974206 2511 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4eee4086-954f-403a-8894-47bbf74e673c" path="/var/lib/kubelet/pods/4eee4086-954f-403a-8894-47bbf74e673c/volumes" Aug 6 07:53:31.975668 kubelet[2511]: I0806 07:53:31.975097 2511 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c2614905-9a91-4d95-add5-b63d8bc90613" path="/var/lib/kubelet/pods/c2614905-9a91-4d95-add5-b63d8bc90613/volumes" Aug 6 07:53:32.452380 sshd[4313]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:32.463876 systemd[1]: sshd@26-64.23.226.177:22-139.178.89.65:36576.service: Deactivated successfully. Aug 6 07:53:32.467110 systemd[1]: session-27.scope: Deactivated successfully. Aug 6 07:53:32.471231 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. Aug 6 07:53:32.489819 systemd[1]: Started sshd@27-64.23.226.177:22-139.178.89.65:36582.service - OpenSSH per-connection server daemon (139.178.89.65:36582). Aug 6 07:53:32.497682 systemd-logind[1450]: Removed session 27. 
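The two "Cleaned up orphaned pod volumes dir" entries are the kubelet's periodic sweep of /var/lib/kubelet/pods for directories whose pods no longer exist and whose volume mounts are gone. A rough sketch of that idea follows, assuming the conventional pod directory layout; it is an illustration, not the actual kubelet_volumes.go logic.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	podsDir := "/var/lib/kubelet/pods"

	// Pod UIDs the node still knows about. In the kubelet this comes from the
	// pod manager; here it is a placeholder set.
	active := map[string]bool{
		"eb179864-d30b-4616-b2f0-010e2da02058": true,
	}

	entries, err := os.ReadDir(podsDir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		// Only report the directory once no volume mounts remain under it.
		volumes, _ := os.ReadDir(filepath.Join(podsDir, e.Name(), "volumes"))
		if len(volumes) == 0 {
			fmt.Println("orphaned pod dir ready for cleanup:", e.Name())
		}
	}
}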
Aug 6 07:53:32.533658 kubelet[2511]: I0806 07:53:32.533351 2511 topology_manager.go:215] "Topology Admit Handler" podUID="eb179864-d30b-4616-b2f0-010e2da02058" podNamespace="kube-system" podName="cilium-77xfr" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534432 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="apply-sysctl-overwrites" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534473 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="clean-cilium-state" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534497 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="mount-cgroup" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534511 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="mount-bpf-fs" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534529 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2614905-9a91-4d95-add5-b63d8bc90613" containerName="cilium-operator" Aug 6 07:53:32.534611 kubelet[2511]: E0806 07:53:32.534539 2511 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="cilium-agent" Aug 6 07:53:32.538627 kubelet[2511]: I0806 07:53:32.534585 2511 memory_manager.go:346] "RemoveStaleState removing state" podUID="4eee4086-954f-403a-8894-47bbf74e673c" containerName="cilium-agent" Aug 6 07:53:32.538627 kubelet[2511]: I0806 07:53:32.535151 2511 memory_manager.go:346] "RemoveStaleState removing state" podUID="c2614905-9a91-4d95-add5-b63d8bc90613" containerName="cilium-operator" Aug 6 07:53:32.571965 systemd[1]: Created slice kubepods-burstable-podeb179864_d30b_4616_b2f0_010e2da02058.slice - libcontainer container kubepods-burstable-podeb179864_d30b_4616_b2f0_010e2da02058.slice. Aug 6 07:53:32.588233 sshd[4324]: Accepted publickey for core from 139.178.89.65 port 36582 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:32.591930 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:32.609395 systemd-logind[1450]: New session 28 of user core. Aug 6 07:53:32.615138 systemd[1]: Started session-28.scope - Session 28 of User core. 
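"Created slice kubepods-burstable-pod....slice" means the new cilium-77xfr pod was classified as Burstable QoS, so its cgroup is placed under the kubepods-burstable slice. The sketch below captures the spirit of that classification (it is not the upstream qos package): a pod with some requests or limits, but without identical requests and limits on every container, ends up Burstable.

package main

import "fmt"

// container resource declarations, reduced to string quantities for the sketch
type container struct {
	requests map[string]string
	limits   map[string]string
}

// qosClass mirrors the spirit of Kubernetes QoS classification:
//   - nothing requested or limited anywhere       -> BestEffort
//   - every container has limits equal to requests -> Guaranteed
//   - anything in between                          -> Burstable
func qosClass(containers []container) string {
	anySet := false
	allGuaranteed := true
	for _, c := range containers {
		if len(c.requests)+len(c.limits) > 0 {
			anySet = true
		}
		if len(c.requests) == 0 || len(c.limits) != len(c.requests) {
			allGuaranteed = false
			continue
		}
		for res, req := range c.requests {
			if c.limits[res] != req {
				allGuaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case allGuaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	// Requests without matching limits: the pod is Burstable, hence the
	// kubepods-burstable-pod<uid>.slice cgroup in the log above.
	fmt.Println(qosClass([]container{{
		requests: map[string]string{"cpu": "100m", "memory": "100Mi"},
	}}))
}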
Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686689 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-host-proc-sys-net\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686756 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-bpf-maps\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686790 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-lib-modules\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686825 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eb179864-d30b-4616-b2f0-010e2da02058-cilium-ipsec-secrets\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686857 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-xtables-lock\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688559 kubelet[2511]: I0806 07:53:32.686890 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb179864-d30b-4616-b2f0-010e2da02058-clustermesh-secrets\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.688368 sshd[4324]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687030 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-hostproc\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687074 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-cilium-cgroup\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687109 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb179864-d30b-4616-b2f0-010e2da02058-cilium-config-path\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687139 2511 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-cilium-run\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687169 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-cni-path\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.689212 kubelet[2511]: I0806 07:53:32.687250 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkl4\" (UniqueName: \"kubernetes.io/projected/eb179864-d30b-4616-b2f0-010e2da02058-kube-api-access-pjkl4\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.690803 kubelet[2511]: I0806 07:53:32.687301 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-etc-cni-netd\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.690803 kubelet[2511]: I0806 07:53:32.687335 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb179864-d30b-4616-b2f0-010e2da02058-hubble-tls\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.690803 kubelet[2511]: I0806 07:53:32.687364 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb179864-d30b-4616-b2f0-010e2da02058-host-proc-sys-kernel\") pod \"cilium-77xfr\" (UID: \"eb179864-d30b-4616-b2f0-010e2da02058\") " pod="kube-system/cilium-77xfr" Aug 6 07:53:32.699892 systemd[1]: sshd@27-64.23.226.177:22-139.178.89.65:36582.service: Deactivated successfully. Aug 6 07:53:32.704345 systemd[1]: session-28.scope: Deactivated successfully. Aug 6 07:53:32.708859 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. Aug 6 07:53:32.716422 systemd[1]: Started sshd@28-64.23.226.177:22-139.178.89.65:36598.service - OpenSSH per-connection server daemon (139.178.89.65:36598). Aug 6 07:53:32.719153 systemd-logind[1450]: Removed session 28. Aug 6 07:53:32.779728 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 36598 ssh2: RSA SHA256:dce1zMFfYq90Y5OOIdZRSBiKLmh3HOOV8AZK432nffA Aug 6 07:53:32.781817 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 07:53:32.795074 systemd-logind[1450]: New session 29 of user core. Aug 6 07:53:32.798965 systemd[1]: Started session-29.scope - Session 29 of User core. 
Aug 6 07:53:32.883100 kubelet[2511]: E0806 07:53:32.880661 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:32.884135 containerd[1471]: time="2024-08-06T07:53:32.883579272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-77xfr,Uid:eb179864-d30b-4616-b2f0-010e2da02058,Namespace:kube-system,Attempt:0,}" Aug 6 07:53:32.950927 containerd[1471]: time="2024-08-06T07:53:32.950754772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 07:53:32.951441 containerd[1471]: time="2024-08-06T07:53:32.951134073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:32.951659 containerd[1471]: time="2024-08-06T07:53:32.951549458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 07:53:32.952237 containerd[1471]: time="2024-08-06T07:53:32.952092310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 07:53:33.009924 systemd[1]: Started cri-containerd-cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f.scope - libcontainer container cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f. Aug 6 07:53:33.012664 kubelet[2511]: I0806 07:53:33.012511 2511 setters.go:552] "Node became not ready" node="ci-4012.1.0-0-e9cfdb5e55" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-06T07:53:33Z","lastTransitionTime":"2024-08-06T07:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 6 07:53:33.119329 containerd[1471]: time="2024-08-06T07:53:33.118163010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-77xfr,Uid:eb179864-d30b-4616-b2f0-010e2da02058,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\"" Aug 6 07:53:33.124366 kubelet[2511]: E0806 07:53:33.121458 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:33.129108 containerd[1471]: time="2024-08-06T07:53:33.128387709Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 6 07:53:33.167010 containerd[1471]: time="2024-08-06T07:53:33.166822663Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808\"" Aug 6 07:53:33.168668 containerd[1471]: time="2024-08-06T07:53:33.168150590Z" level=info msg="StartContainer for \"3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808\"" Aug 6 07:53:33.216967 systemd[1]: Started cri-containerd-3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808.scope - libcontainer container 3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808. 
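The sequence above, RunPodSandbox for cilium-77xfr, then "CreateContainer within sandbox" for mount-cgroup, then StartContainer, is the standard CRI flow the kubelet drives. A compressed sketch of those three calls against the runtime service follows; the socket path and image reference are placeholders and error handling is omitted for brevity.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: the "cf0a54a7..." sandbox ID in the log comes back here.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-77xfr",
			Namespace: "kube-system",
			Uid:       "eb179864-d30b-4616-b2f0-010e2da02058",
		},
	}
	sandbox, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// 2. CreateContainer within that sandbox (the "mount-cgroup" init container).
	created, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // placeholder image ref
		},
		SandboxConfig: sandboxCfg,
	})

	// 3. StartContainer: matches the "StartContainer for \"3b7f...\"" entries above.
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
}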
Aug 6 07:53:33.267408 containerd[1471]: time="2024-08-06T07:53:33.265758138Z" level=info msg="StartContainer for \"3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808\" returns successfully" Aug 6 07:53:33.297467 systemd[1]: cri-containerd-3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808.scope: Deactivated successfully. Aug 6 07:53:33.358760 containerd[1471]: time="2024-08-06T07:53:33.358675076Z" level=info msg="shim disconnected" id=3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808 namespace=k8s.io Aug 6 07:53:33.358760 containerd[1471]: time="2024-08-06T07:53:33.358746547Z" level=warning msg="cleaning up after shim disconnected" id=3b7f3addb04fa285689e2b618c156aa34a69f7125fd1743d8cbc1d3a660de808 namespace=k8s.io Aug 6 07:53:33.358760 containerd[1471]: time="2024-08-06T07:53:33.358758420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:33.580236 kubelet[2511]: E0806 07:53:33.580195 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:33.584869 containerd[1471]: time="2024-08-06T07:53:33.584618594Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 6 07:53:33.616829 containerd[1471]: time="2024-08-06T07:53:33.616740984Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036\"" Aug 6 07:53:33.617892 containerd[1471]: time="2024-08-06T07:53:33.617823401Z" level=info msg="StartContainer for \"3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036\"" Aug 6 07:53:33.658961 systemd[1]: Started cri-containerd-3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036.scope - libcontainer container 3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036. Aug 6 07:53:33.708408 containerd[1471]: time="2024-08-06T07:53:33.708028665Z" level=info msg="StartContainer for \"3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036\" returns successfully" Aug 6 07:53:33.720170 systemd[1]: cri-containerd-3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036.scope: Deactivated successfully. 
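apply-sysctl-overwrites, the init container started above, exists to write a handful of kernel sysctls before the agent runs. The sketch below shows what such a step boils down to; the specific keys and values are illustrative assumptions, not the container's actual script.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a value under /proc/sys, e.g. "net.ipv4.conf.all.rp_filter".
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example overrides only; the real init container decides these itself.
	overrides := map[string]string{
		"net.ipv4.conf.all.rp_filter": "0",
		"net.core.bpf_jit_enable":     "1",
	}
	for key, value := range overrides {
		if err := writeSysctl(key, value); err != nil {
			fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", key, err)
			os.Exit(1)
		}
		fmt.Printf("set %s = %s\n", key, value)
	}
}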
Aug 6 07:53:33.763630 containerd[1471]: time="2024-08-06T07:53:33.763224717Z" level=info msg="shim disconnected" id=3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036 namespace=k8s.io Aug 6 07:53:33.763630 containerd[1471]: time="2024-08-06T07:53:33.763300010Z" level=warning msg="cleaning up after shim disconnected" id=3694097a0a0862aedc8191c04a8ba026cb5c0a7f347f3fe3402d0c457ebbe036 namespace=k8s.io Aug 6 07:53:33.763630 containerd[1471]: time="2024-08-06T07:53:33.763325361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:34.585197 kubelet[2511]: E0806 07:53:34.585141 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:34.594626 containerd[1471]: time="2024-08-06T07:53:34.593271733Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 6 07:53:34.637214 containerd[1471]: time="2024-08-06T07:53:34.637152529Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6\"" Aug 6 07:53:34.638748 containerd[1471]: time="2024-08-06T07:53:34.638700278Z" level=info msg="StartContainer for \"778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6\"" Aug 6 07:53:34.704119 systemd[1]: Started cri-containerd-778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6.scope - libcontainer container 778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6. Aug 6 07:53:34.759079 containerd[1471]: time="2024-08-06T07:53:34.759018180Z" level=info msg="StartContainer for \"778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6\" returns successfully" Aug 6 07:53:34.768938 systemd[1]: cri-containerd-778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6.scope: Deactivated successfully. Aug 6 07:53:34.815792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6-rootfs.mount: Deactivated successfully. 
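The mount-bpf-fs init container that just ran conventionally makes sure a BPF filesystem is mounted at /sys/fs/bpf so the agent's maps survive restarts. A minimal sketch of the equivalent check-and-mount follows (an illustration, not Cilium's script; it needs root).

package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

// bpffsMounted reports whether a filesystem of type "bpf" is already mounted
// at the target, by scanning /proc/self/mounts.
func bpffsMounted(target string) bool {
	data, err := os.ReadFile("/proc/self/mounts")
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) >= 3 && f[1] == target && f[2] == "bpf" {
			return true
		}
	}
	return false
}

func main() {
	target := "/sys/fs/bpf"
	if bpffsMounted(target) {
		fmt.Println("bpffs already mounted at", target)
		return
	}
	if err := os.MkdirAll(target, 0o755); err != nil {
		panic(err)
	}
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		panic(err)
	}
	fmt.Println("mounted bpffs at", target)
}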
Aug 6 07:53:34.828338 containerd[1471]: time="2024-08-06T07:53:34.828246190Z" level=info msg="shim disconnected" id=778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6 namespace=k8s.io Aug 6 07:53:34.828338 containerd[1471]: time="2024-08-06T07:53:34.828330301Z" level=warning msg="cleaning up after shim disconnected" id=778836a18317e57d4b275778a436edb65e3cca114ea0b952b4c86f7a394633b6 namespace=k8s.io Aug 6 07:53:34.828338 containerd[1471]: time="2024-08-06T07:53:34.828342914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:35.196200 kubelet[2511]: E0806 07:53:35.196147 2511 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 6 07:53:35.592774 kubelet[2511]: E0806 07:53:35.592735 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:35.598624 containerd[1471]: time="2024-08-06T07:53:35.597288075Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 6 07:53:35.629556 containerd[1471]: time="2024-08-06T07:53:35.629496151Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0\"" Aug 6 07:53:35.633886 containerd[1471]: time="2024-08-06T07:53:35.631090166Z" level=info msg="StartContainer for \"17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0\"" Aug 6 07:53:35.699241 systemd[1]: Started cri-containerd-17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0.scope - libcontainer container 17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0. Aug 6 07:53:35.752569 systemd[1]: cri-containerd-17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0.scope: Deactivated successfully. Aug 6 07:53:35.760809 containerd[1471]: time="2024-08-06T07:53:35.760740790Z" level=info msg="StartContainer for \"17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0\" returns successfully" Aug 6 07:53:35.769508 containerd[1471]: time="2024-08-06T07:53:35.755225605Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb179864_d30b_4616_b2f0_010e2da02058.slice/cri-containerd-17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0.scope/memory.events\": no such file or directory" Aug 6 07:53:35.830953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0-rootfs.mount: Deactivated successfully. 
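The *cgroupsv2.Manager.EventChan warning above is a race rather than a failure: the short-lived clean-cilium-state scope exited before an inotify watch could be added on its memory.events file, so the runtime only logs it and moves on. The sketch below sets up such a watch with the fsnotify library and treats a missing file as benign in the same way; the cgroup path is a placeholder.

package main

import (
	"fmt"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Placeholder scope directory; real paths look like the one in the warning above.
	scope := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/example.scope"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		panic(err)
	}
	defer w.Close()

	// memory.events changes whenever the cgroup records low/high/max/oom events.
	if err := w.Add(filepath.Join(scope, "memory.events")); err != nil {
		// The scope may already be gone (the container exited); log and move on,
		// just as the journal warning does.
		fmt.Println("warning: could not watch memory.events:", err)
		return
	}

	for ev := range w.Events {
		if ev.Op&fsnotify.Write != 0 {
			fmt.Println("memory.events updated:", ev.Name)
		}
	}
}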
Aug 6 07:53:35.836861 containerd[1471]: time="2024-08-06T07:53:35.836766455Z" level=info msg="shim disconnected" id=17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0 namespace=k8s.io Aug 6 07:53:35.836861 containerd[1471]: time="2024-08-06T07:53:35.836856861Z" level=warning msg="cleaning up after shim disconnected" id=17391936e2be901f834eabf726263fe13c20efb08fe28fc08314ea69a68eacc0 namespace=k8s.io Aug 6 07:53:35.837191 containerd[1471]: time="2024-08-06T07:53:35.836876077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 07:53:36.598548 kubelet[2511]: E0806 07:53:36.598480 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:36.605938 containerd[1471]: time="2024-08-06T07:53:36.605843323Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 6 07:53:36.652928 containerd[1471]: time="2024-08-06T07:53:36.652638781Z" level=info msg="CreateContainer within sandbox \"cf0a54a76fc07812f448606572b13eb97e56e3d85cc13b02a94262725525916f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5716069144cba4f5fea7e47129d2764e17fb7070406a81e0ea35e4d5fb281a05\"" Aug 6 07:53:36.655504 containerd[1471]: time="2024-08-06T07:53:36.653898380Z" level=info msg="StartContainer for \"5716069144cba4f5fea7e47129d2764e17fb7070406a81e0ea35e4d5fb281a05\"" Aug 6 07:53:36.657347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440300410.mount: Deactivated successfully. Aug 6 07:53:36.705137 systemd[1]: Started cri-containerd-5716069144cba4f5fea7e47129d2764e17fb7070406a81e0ea35e4d5fb281a05.scope - libcontainer container 5716069144cba4f5fea7e47129d2764e17fb7070406a81e0ea35e4d5fb281a05. 
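The recurring "Nameserver limits exceeded" entries come from the kubelet capping the resolv.conf it hands to pods at three nameservers, the glibc resolver limit, and dropping the rest. A small sketch of that check follows; the parsing is simplified, and the limit of 3 mirrors the kubelet's behaviour.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit, also enforced by the kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	applied := nameservers
	if len(nameservers) > maxNameservers {
		applied = nameservers[:maxNameservers]
		// This is the situation behind the "Nameserver limits exceeded" log line.
		fmt.Printf("warning: %d nameservers found, only the first %d are applied\n",
			len(nameservers), maxNameservers)
	}
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}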
Aug 6 07:53:36.756673 containerd[1471]: time="2024-08-06T07:53:36.756615107Z" level=info msg="StartContainer for \"5716069144cba4f5fea7e47129d2764e17fb7070406a81e0ea35e4d5fb281a05\" returns successfully" Aug 6 07:53:37.549708 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 6 07:53:37.608291 kubelet[2511]: E0806 07:53:37.608241 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:37.635851 kubelet[2511]: I0806 07:53:37.635793 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-77xfr" podStartSLOduration=5.635730916 podCreationTimestamp="2024-08-06 07:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:53:37.6337782 +0000 UTC m=+147.871874062" watchObservedRunningTime="2024-08-06 07:53:37.635730916 +0000 UTC m=+147.873826781" Aug 6 07:53:38.884110 kubelet[2511]: E0806 07:53:38.883932 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:38.970673 kubelet[2511]: E0806 07:53:38.970151 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:41.505890 systemd-networkd[1377]: lxc_health: Link UP Aug 6 07:53:41.527429 systemd-networkd[1377]: lxc_health: Gained carrier Aug 6 07:53:42.884054 kubelet[2511]: E0806 07:53:42.884015 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:43.449802 systemd-networkd[1377]: lxc_health: Gained IPv6LL Aug 6 07:53:43.624669 kubelet[2511]: E0806 07:53:43.624477 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:44.626771 kubelet[2511]: E0806 07:53:44.626720 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:46.971234 kubelet[2511]: E0806 07:53:46.971170 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 6 07:53:48.704449 sshd[4332]: pam_unix(sshd:session): session closed for user core Aug 6 07:53:48.713711 systemd[1]: sshd@28-64.23.226.177:22-139.178.89.65:36598.service: Deactivated successfully. Aug 6 07:53:48.718216 systemd[1]: session-29.scope: Deactivated successfully. Aug 6 07:53:48.724273 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit. Aug 6 07:53:48.726739 systemd-logind[1450]: Removed session 29.
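systemd-networkd reporting lxc_health "Link UP", "Gained carrier" and "Gained IPv6LL" is the veth for Cilium's health endpoint coming up once the agent is running. The stdlib sketch below checks the same two facts from userspace; only the interface name is taken from the log, nothing else about the host is assumed.

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Interface name taken from the systemd-networkd entries above.
	ifi, err := net.InterfaceByName("lxc_health")
	if err != nil {
		fmt.Println("lxc_health not present:", err)
		return
	}

	// "Link UP" / "Gained carrier" corresponds to the interface being up.
	fmt.Printf("lxc_health up: %v\n", ifi.Flags&net.FlagUp != 0)

	// "Gained IPv6LL" corresponds to a fe80::/10 link-local address.
	addrs, err := ifi.Addrs()
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		if strings.HasPrefix(strings.ToLower(a.String()), "fe80:") {
			fmt.Println("IPv6 link-local address:", a.String())
		}
	}
}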