Feb 12 19:44:52.005992 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 12 19:44:52.006030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:44:52.006049 kernel: BIOS-provided physical RAM map:
Feb 12 19:44:52.006061 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 19:44:52.006072 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 19:44:52.006084 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 19:44:52.006098 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb 12 19:44:52.006110 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb 12 19:44:52.006125 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 19:44:52.006137 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 19:44:52.006149 kernel: NX (Execute Disable) protection: active
Feb 12 19:44:52.006159 kernel: SMBIOS 2.8 present.
Feb 12 19:44:52.006169 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 12 19:44:52.006179 kernel: Hypervisor detected: KVM
Feb 12 19:44:52.006191 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 19:44:52.006208 kernel: kvm-clock: cpu 0, msr 47faa001, primary cpu clock
Feb 12 19:44:52.006221 kernel: kvm-clock: using sched offset of 4158362583 cycles
Feb 12 19:44:52.006235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 19:44:52.006248 kernel: tsc: Detected 2494.140 MHz processor
Feb 12 19:44:52.006262 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:44:52.006276 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:44:52.006289 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb 12 19:44:52.006302 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:44:52.006318 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:44:52.006335 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb 12 19:44:52.006349 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006422 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006436 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006449 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 19:44:52.006463 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006476 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006488 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006505 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:44:52.006518 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 12 19:44:52.006531 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 12 19:44:52.006544 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 19:44:52.006558 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 12 19:44:52.006571 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 12 19:44:52.006584 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 12 19:44:52.006598 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 12 19:44:52.006621 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:44:52.006636 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:44:52.006649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 19:44:52.006661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 12 19:44:52.006674 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb 12 19:44:52.006689 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb 12 19:44:52.006707 kernel: Zone ranges:
Feb 12 19:44:52.006721 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:44:52.006735 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Feb 12 19:44:52.006750 kernel: Normal empty
Feb 12 19:44:52.006765 kernel: Movable zone start for each node
Feb 12 19:44:52.007929 kernel: Early memory node ranges
Feb 12 19:44:52.007967 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 19:44:52.007982 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb 12 19:44:52.007996 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb 12 19:44:52.008020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:44:52.008035 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 19:44:52.008049 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb 12 19:44:52.008064 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 19:44:52.008076 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 19:44:52.008088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:44:52.008100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 19:44:52.008112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 19:44:52.008126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:44:52.008144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 19:44:52.008159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 19:44:52.008173 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:44:52.008187 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 19:44:52.008200 kernel: TSC deadline timer available
Feb 12 19:44:52.008213 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:44:52.008225 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 19:44:52.008501 kernel: Booting paravirtualized kernel on KVM
Feb 12 19:44:52.008513 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:44:52.008532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:44:52.008543 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:44:52.008555 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:44:52.008568 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:44:52.008580 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 19:44:52.008592 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 19:44:52.008604 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Feb 12 19:44:52.008615 kernel: Policy zone: DMA32
Feb 12 19:44:52.008629 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:44:52.008645 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:44:52.008657 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:44:52.008670 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:44:52.008683 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:44:52.008697 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 19:44:52.008710 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:44:52.008723 kernel: Kernel/User page tables isolation: enabled
Feb 12 19:44:52.008737 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:44:52.008754 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:44:52.008767 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:44:52.009821 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:44:52.009846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:44:52.009856 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:44:52.009864 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:44:52.009882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:44:52.009891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:44:52.009921 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 19:44:52.009935 kernel: random: crng init done
Feb 12 19:44:52.009944 kernel: Console: colour VGA+ 80x25
Feb 12 19:44:52.009954 kernel: printk: console [tty0] enabled
Feb 12 19:44:52.009963 kernel: printk: console [ttyS0] enabled
Feb 12 19:44:52.009971 kernel: ACPI: Core revision 20210730
Feb 12 19:44:52.009979 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 19:44:52.010001 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:44:52.010010 kernel: x2apic enabled
Feb 12 19:44:52.010018 kernel: Switched APIC routing to physical x2apic.
Feb 12 19:44:52.010029 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 19:44:52.010038 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Feb 12 19:44:52.010050 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Feb 12 19:44:52.010062 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 19:44:52.010073 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 19:44:52.010085 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:44:52.010098 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:44:52.010110 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:44:52.010119 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:44:52.010130 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 12 19:44:52.010147 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 19:44:52.010156 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 19:44:52.010167 kernel: MDS: Mitigation: Clear CPU buffers
Feb 12 19:44:52.010176 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:44:52.010184 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:44:52.010193 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:44:52.010206 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:44:52.010219 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:44:52.010230 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 19:44:52.010241 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:44:52.010249 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:44:52.010258 kernel: LSM: Security Framework initializing
Feb 12 19:44:52.010266 kernel: SELinux: Initializing.
Feb 12 19:44:52.010275 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:44:52.010283 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:44:52.010292 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2)
Feb 12 19:44:52.010303 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 12 19:44:52.010312 kernel: signal: max sigframe size: 1776
Feb 12 19:44:52.010320 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:44:52.010329 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:44:52.010337 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:44:52.010346 kernel: x86: Booting SMP configuration:
Feb 12 19:44:52.010354 kernel: .... node #0, CPUs: #1
Feb 12 19:44:52.010366 kernel: kvm-clock: cpu 1, msr 47faa041, secondary cpu clock
Feb 12 19:44:52.010378 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 19:44:52.010389 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:44:52.010398 kernel: smpboot: Max logical packages: 1
Feb 12 19:44:52.010407 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Feb 12 19:44:52.010416 kernel: devtmpfs: initialized
Feb 12 19:44:52.010424 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:44:52.010432 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:44:52.010441 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:44:52.010450 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:44:52.010458 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:44:52.010469 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:44:52.010478 kernel: audit: type=2000 audit(1707767091.445:1): state=initialized audit_enabled=0 res=1
Feb 12 19:44:52.010489 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:44:52.010500 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:44:52.010509 kernel: cpuidle: using governor menu
Feb 12 19:44:52.010521 kernel: ACPI: bus type PCI registered
Feb 12 19:44:52.010530 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:44:52.010538 kernel: dca service started, version 1.12.1
Feb 12 19:44:52.010547 kernel: PCI: Using configuration type 1 for base access
Feb 12 19:44:52.010559 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:44:52.010568 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:44:52.010577 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:44:52.010603 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:44:52.010615 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:44:52.010628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:44:52.010639 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:44:52.010648 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:44:52.010656 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:44:52.010668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:44:52.010677 kernel: ACPI: Interpreter enabled
Feb 12 19:44:52.010685 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:44:52.010694 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:44:52.010703 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:44:52.010711 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 19:44:52.010720 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:44:52.013041 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:44:52.013178 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
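
A note on reading the map: the BIOS-e820 lines above are the firmware's physical-memory layout, and the two "usable" ranges account for the droplet's roughly 2 GiB of RAM (compare the later "Memory: 1975320K/2096600K available" line). A minimal Python sketch of summing them from a saved dmesg capture (the helper name and the idea of a saved capture are assumptions for illustration, not part of the boot flow):

    import re

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg: str) -> int:
        # Each e820 range is inclusive, so its size is end - start + 1.
        return sum(
            int(end, 16) - int(start, 16) + 1
            for start, end, kind in E820.findall(dmesg)
            if kind == "usable"
        )

    # For the two usable ranges above:
    # (0x9fbff + 1) + (0x7ffd7fff - 0x100000 + 1) = 2146925568 bytes, about 2 GiB.
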
Feb 12 19:44:52.013192 kernel: acpiphp: Slot [3] registered
Feb 12 19:44:52.013202 kernel: acpiphp: Slot [4] registered
Feb 12 19:44:52.013210 kernel: acpiphp: Slot [5] registered
Feb 12 19:44:52.013220 kernel: acpiphp: Slot [6] registered
Feb 12 19:44:52.013231 kernel: acpiphp: Slot [7] registered
Feb 12 19:44:52.013239 kernel: acpiphp: Slot [8] registered
Feb 12 19:44:52.013247 kernel: acpiphp: Slot [9] registered
Feb 12 19:44:52.013259 kernel: acpiphp: Slot [10] registered
Feb 12 19:44:52.013276 kernel: acpiphp: Slot [11] registered
Feb 12 19:44:52.013285 kernel: acpiphp: Slot [12] registered
Feb 12 19:44:52.013293 kernel: acpiphp: Slot [13] registered
Feb 12 19:44:52.013302 kernel: acpiphp: Slot [14] registered
Feb 12 19:44:52.013310 kernel: acpiphp: Slot [15] registered
Feb 12 19:44:52.013318 kernel: acpiphp: Slot [16] registered
Feb 12 19:44:52.013327 kernel: acpiphp: Slot [17] registered
Feb 12 19:44:52.013335 kernel: acpiphp: Slot [18] registered
Feb 12 19:44:52.013347 kernel: acpiphp: Slot [19] registered
Feb 12 19:44:52.013359 kernel: acpiphp: Slot [20] registered
Feb 12 19:44:52.013370 kernel: acpiphp: Slot [21] registered
Feb 12 19:44:52.013382 kernel: acpiphp: Slot [22] registered
Feb 12 19:44:52.013394 kernel: acpiphp: Slot [23] registered
Feb 12 19:44:52.013410 kernel: acpiphp: Slot [24] registered
Feb 12 19:44:52.013418 kernel: acpiphp: Slot [25] registered
Feb 12 19:44:52.013427 kernel: acpiphp: Slot [26] registered
Feb 12 19:44:52.013435 kernel: acpiphp: Slot [27] registered
Feb 12 19:44:52.013445 kernel: acpiphp: Slot [28] registered
Feb 12 19:44:52.013459 kernel: acpiphp: Slot [29] registered
Feb 12 19:44:52.013467 kernel: acpiphp: Slot [30] registered
Feb 12 19:44:52.013476 kernel: acpiphp: Slot [31] registered
Feb 12 19:44:52.013487 kernel: PCI host bridge to bus 0000:00
Feb 12 19:44:52.013615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 19:44:52.013706 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 19:44:52.013812 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 19:44:52.013958 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 19:44:52.014052 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 19:44:52.014131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:44:52.014262 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 19:44:52.014380 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 19:44:52.014536 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 19:44:52.014701 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 12 19:44:52.014897 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 19:44:52.015047 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 19:44:52.015197 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 19:44:52.015355 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 19:44:52.015474 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 12 19:44:52.015580 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 12 19:44:52.015699 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 19:44:52.015826 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 19:44:52.015920 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 19:44:52.016034 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 19:44:52.016142 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 19:44:52.016262 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 19:44:52.016412 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 12 19:44:52.016529 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 19:44:52.016619 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 19:44:52.016726 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:44:52.022959 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 12 19:44:52.023089 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 12 19:44:52.023183 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 19:44:52.023350 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:44:52.023538 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 12 19:44:52.023676 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 12 19:44:52.023856 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 19:44:52.024050 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 12 19:44:52.024198 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 12 19:44:52.024353 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 12 19:44:52.024494 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 19:44:52.024676 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:44:52.024841 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 19:44:52.024977 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 12 19:44:52.025131 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 19:44:52.025256 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:44:52.025358 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 12 19:44:52.025465 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 12 19:44:52.025617 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 12 19:44:52.025764 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 19:44:52.025925 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 12 19:44:52.026041 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 12 19:44:52.026053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 19:44:52.026062 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 19:44:52.026071 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 19:44:52.026092 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 19:44:52.026101 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 19:44:52.026110 kernel: iommu: Default domain type: Translated
Feb 12 19:44:52.026118 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:44:52.026210 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 19:44:52.026298 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 19:44:52.026384 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 19:44:52.026395 kernel: vgaarb: loaded
Feb 12 19:44:52.026410 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:44:52.026419 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:44:52.026428 kernel: PTP clock support registered
Feb 12 19:44:52.026437 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:44:52.026445 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 19:44:52.026454 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 19:44:52.026462 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb 12 19:44:52.026470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 19:44:52.026479 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 19:44:52.026494 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 19:44:52.026504 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:44:52.026514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:44:52.026522 kernel: pnp: PnP ACPI init
Feb 12 19:44:52.026530 kernel: pnp: PnP ACPI: found 4 devices
Feb 12 19:44:52.026540 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:44:52.026548 kernel: NET: Registered PF_INET protocol family
Feb 12 19:44:52.026557 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:44:52.026566 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 19:44:52.026580 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:44:52.026589 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:44:52.026597 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 19:44:52.026606 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 19:44:52.026614 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:44:52.026623 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:44:52.026631 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:44:52.026640 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:44:52.026728 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 19:44:52.026858 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 19:44:52.026942 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 19:44:52.027099 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 19:44:52.027187 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 19:44:52.027283 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 19:44:52.027417 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 19:44:52.027534 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 19:44:52.027551 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 19:44:52.027830 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 35187 usecs
Feb 12 19:44:52.027854 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:44:52.027866 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:44:52.027881 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Feb 12 19:44:52.027896 kernel: Initialise system trusted keyrings
Feb 12 19:44:52.027908 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 19:44:52.027916 kernel: Key type asymmetric registered
Feb 12 19:44:52.027925 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:44:52.027944 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:44:52.027953 kernel: io scheduler mq-deadline registered
Feb 12 19:44:52.027962 kernel: io scheduler kyber registered
Feb 12 19:44:52.027970 kernel: io scheduler bfq registered
Feb 12 19:44:52.027979 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:44:52.027988 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 19:44:52.027996 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 19:44:52.028005 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 19:44:52.028014 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:44:52.028023 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:44:52.028036 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 19:44:52.028045 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 19:44:52.028054 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 19:44:52.028179 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 12 19:44:52.028192 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 19:44:52.028319 kernel: rtc_cmos 00:03: registered as rtc0
Feb 12 19:44:52.028445 kernel: rtc_cmos 00:03: setting system clock to 2024-02-12T19:44:51 UTC (1707767091)
Feb 12 19:44:52.028544 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 12 19:44:52.028555 kernel: intel_pstate: CPU model not supported
Feb 12 19:44:52.028564 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:44:52.028573 kernel: Segment Routing with IPv6
Feb 12 19:44:52.028582 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:44:52.028648 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:44:52.028656 kernel: Key type dns_resolver registered
Feb 12 19:44:52.028665 kernel: IPI shorthand broadcast: enabled
Feb 12 19:44:52.028674 kernel: sched_clock: Marking stable (743308479, 96146107)->(953380135, -113925549)
Feb 12 19:44:52.028692 kernel: registered taskstats version 1
Feb 12 19:44:52.028701 kernel: Loading compiled-in X.509 certificates
Feb 12 19:44:52.028709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 12 19:44:52.028718 kernel: Key type .fscrypt registered
Feb 12 19:44:52.028727 kernel: Key type fscrypt-provisioning registered
Feb 12 19:44:52.028736 kernel: ima: No TPM chip found, activating TPM-bypass!
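
The [1af4:...] IDs in the PCI scan above are virtio functions (vendor 0x1af4 is the virtio/Red Hat ID): 0x1000 is a network device, 0x1001 a block device, 0x1002 a memory balloon, 0x1004 a SCSI HBA, and 0x1050 a modern virtio GPU, which matches the drivers that attach later (virtio_blk, "Virtio SCSI HBA", and the virtio VGA at 00:02.0). A small lookup sketch (the helper is hypothetical, not part of the boot flow):

    # Legacy virtio PCI device IDs are 0x1000 + type; modern ("virtio 1.0")
    # IDs are 0x1040 + type, so 0x1050 is type 16, the GPU.
    VIRTIO_IDS = {
        0x1000: "network", 0x1001: "block", 0x1002: "memory balloon",
        0x1004: "SCSI host", 0x1050: "GPU (modern, 0x1040 + 16)",
    }

    def describe(slot: str, vendor: int, device: int) -> str:
        if vendor != 0x1AF4:
            return f"{slot}: not a virtio function"
        return f"{slot}: virtio {VIRTIO_IDS.get(device, 'unknown type')}"

    print(describe("0000:00:03.0", 0x1AF4, 0x1000))  # virtio network
    print(describe("0000:00:06.0", 0x1AF4, 0x1001))  # virtio block
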
Feb 12 19:44:52.028745 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:44:52.028753 kernel: ima: No architecture policies found
Feb 12 19:44:52.028762 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:44:52.028776 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:44:52.028818 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:44:52.028827 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:44:52.028836 kernel: Run /init as init process
Feb 12 19:44:52.028844 kernel: with arguments:
Feb 12 19:44:52.028854 kernel: /init
Feb 12 19:44:52.028900 kernel: with environment:
Feb 12 19:44:52.028914 kernel: HOME=/
Feb 12 19:44:52.028926 kernel: TERM=linux
Feb 12 19:44:52.028945 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:44:52.028964 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:44:52.028981 systemd[1]: Detected virtualization kvm.
Feb 12 19:44:52.028996 systemd[1]: Detected architecture x86-64.
Feb 12 19:44:52.029007 systemd[1]: Running in initrd.
Feb 12 19:44:52.029017 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:44:52.029026 systemd[1]: Hostname set to .
Feb 12 19:44:52.029047 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:44:52.029056 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:44:52.029065 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:44:52.029075 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:44:52.029084 systemd[1]: Reached target paths.target.
Feb 12 19:44:52.029093 systemd[1]: Reached target slices.target.
Feb 12 19:44:52.029102 systemd[1]: Reached target swap.target.
Feb 12 19:44:52.029111 systemd[1]: Reached target timers.target.
Feb 12 19:44:52.029127 systemd[1]: Listening on iscsid.socket.
Feb 12 19:44:52.029137 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:44:52.029146 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:44:52.029161 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:44:52.029170 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:44:52.029179 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:44:52.029189 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:44:52.029198 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:44:52.029212 systemd[1]: Reached target sockets.target.
Feb 12 19:44:52.029225 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:44:52.029238 systemd[1]: Finished network-cleanup.service.
Feb 12 19:44:52.029267 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:44:52.029277 systemd[1]: Starting systemd-journald.service...
Feb 12 19:44:52.029290 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:44:52.029305 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:44:52.029315 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:44:52.029324 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:44:52.029334 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:44:52.029356 systemd-journald[183]: Journal started
Feb 12 19:44:52.029434 systemd-journald[183]: Runtime Journal (/run/log/journal/85aaea18ba2544f79d384ce5be4cc7d3) is 4.9M, max 39.5M, 34.5M free.
Feb 12 19:44:52.003368 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:44:52.086333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:44:52.086361 kernel: Bridge firewalling registered
Feb 12 19:44:52.086374 kernel: SCSI subsystem initialized
Feb 12 19:44:52.086385 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:44:52.086417 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:44:52.048680 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 12 19:44:52.091431 systemd[1]: Started systemd-journald.service.
Feb 12 19:44:52.091462 kernel: audit: type=1130 audit(1707767092.085:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.091483 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:44:52.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.074754 systemd-resolved[185]: Positive Trust Anchors:
Feb 12 19:44:52.074766 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:44:52.074834 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:44:52.098265 kernel: audit: type=1130 audit(1707767092.093:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.079035 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 12 19:44:52.094380 systemd[1]: Started systemd-resolved.service.
Feb 12 19:44:52.105548 kernel: audit: type=1130 audit(1707767092.097:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.105580 kernel: audit: type=1130 audit(1707767092.101:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.094964 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 12 19:44:52.108899 kernel: audit: type=1130 audit(1707767092.105:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.099083 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:44:52.102419 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:44:52.106248 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:44:52.110408 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:44:52.112568 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:44:52.114038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:44:52.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.128943 kernel: audit: type=1130 audit(1707767092.124:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.125160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:44:52.131807 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:44:52.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.143610 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:44:52.144651 kernel: audit: type=1130 audit(1707767092.131:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.145398 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:44:52.149827 kernel: audit: type=1130 audit(1707767092.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.158135 dracut-cmdline[207]: dracut-dracut-053
Feb 12 19:44:52.161818 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:44:52.245826 kernel: Loading iSCSI transport class v2.0-870.
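
The same command line appears three times above: the bootloader's view, the kernel's view (with dracut's prepended rootflags/mount.usrflags), and dracut-cmdline's view. Splitting it into key/value pairs makes parameters like verity.usrhash and root easier to inspect; a rough sketch (parse_cmdline is a hypothetical helper, not something the boot flow runs):

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to True.
        This sketch keeps the last value for repeated keys; the kernel itself
        treats some duplicates (like console=) cumulatively."""
        args = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            args[key] = value if sep else True
        return args

    args = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected"
    )
    print(args["root"])                # LABEL=ROOT
    print(args["flatcar.first_boot"])  # detected
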
Feb 12 19:44:52.259818 kernel: iscsi: registered transport (tcp)
Feb 12 19:44:52.285823 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:44:52.285904 kernel: QLogic iSCSI HBA Driver
Feb 12 19:44:52.345607 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:44:52.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.347619 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:44:52.353270 kernel: audit: type=1130 audit(1707767092.345:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.410854 kernel: raid6: avx2x4 gen() 14197 MB/s
Feb 12 19:44:52.427848 kernel: raid6: avx2x4 xor() 8685 MB/s
Feb 12 19:44:52.444856 kernel: raid6: avx2x2 gen() 15288 MB/s
Feb 12 19:44:52.462047 kernel: raid6: avx2x2 xor() 15745 MB/s
Feb 12 19:44:52.478934 kernel: raid6: avx2x1 gen() 9750 MB/s
Feb 12 19:44:52.495861 kernel: raid6: avx2x1 xor() 12276 MB/s
Feb 12 19:44:52.512877 kernel: raid6: sse2x4 gen() 8967 MB/s
Feb 12 19:44:52.529936 kernel: raid6: sse2x4 xor() 5116 MB/s
Feb 12 19:44:52.546861 kernel: raid6: sse2x2 gen() 8170 MB/s
Feb 12 19:44:52.563866 kernel: raid6: sse2x2 xor() 6169 MB/s
Feb 12 19:44:52.580877 kernel: raid6: sse2x1 gen() 7435 MB/s
Feb 12 19:44:52.598666 kernel: raid6: sse2x1 xor() 4948 MB/s
Feb 12 19:44:52.598759 kernel: raid6: using algorithm avx2x2 gen() 15288 MB/s
Feb 12 19:44:52.598810 kernel: raid6: .... xor() 15745 MB/s, rmw enabled
Feb 12 19:44:52.599554 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 19:44:52.618940 kernel: xor: automatically using best checksumming function avx
Feb 12 19:44:52.759271 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 19:44:52.774402 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:44:52.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.774000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:44:52.774000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:44:52.776274 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:44:52.793378 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 12 19:44:52.800724 systemd[1]: Started systemd-udevd.service.
Feb 12 19:44:52.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.805741 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:44:52.825753 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation
Feb 12 19:44:52.871120 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:44:52.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:52.873344 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:44:52.930678 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:44:52.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:53.010846 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 12 19:44:53.027817 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:44:53.030695 kernel: scsi host0: Virtio SCSI HBA
Feb 12 19:44:53.045671 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:44:53.045750 kernel: GPT:9289727 != 125829119
Feb 12 19:44:53.045768 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:44:53.045799 kernel: GPT:9289727 != 125829119
Feb 12 19:44:53.046356 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:44:53.047468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:44:53.080820 kernel: virtio_blk virtio5: [vdb] 1000 512-byte logical blocks (512 kB/500 KiB)
Feb 12 19:44:53.113820 kernel: ACPI: bus type USB registered
Feb 12 19:44:53.113893 kernel: usbcore: registered new interface driver usbfs
Feb 12 19:44:53.113908 kernel: usbcore: registered new interface driver hub
Feb 12 19:44:53.113920 kernel: usbcore: registered new device driver usb
Feb 12 19:44:53.129825 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 12 19:44:53.133806 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 19:44:53.140134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:44:53.141133 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (437)
Feb 12 19:44:53.146812 kernel: ehci-pci: EHCI PCI platform driver
Feb 12 19:44:53.152817 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 12 19:44:53.162264 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:44:53.237852 kernel: AES CTR mode by8 optimization enabled
Feb 12 19:44:53.237910 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 12 19:44:53.238167 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 12 19:44:53.238287 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 12 19:44:53.238540 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Feb 12 19:44:53.238693 kernel: hub 1-0:1.0: USB hub found
Feb 12 19:44:53.238912 kernel: hub 1-0:1.0: 2 ports detected
Feb 12 19:44:53.239052 kernel: libata version 3.00 loaded.
Feb 12 19:44:53.239092 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 19:44:53.239222 kernel: scsi host1: ata_piix
Feb 12 19:44:53.239408 kernel: scsi host2: ata_piix
Feb 12 19:44:53.239607 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 12 19:44:53.239623 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 12 19:44:53.243803 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:44:53.244655 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:44:53.249362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:44:53.251246 systemd[1]: Starting disk-uuid.service...
Feb 12 19:44:53.260052 disk-uuid[504]: Primary Header is updated.
Feb 12 19:44:53.260052 disk-uuid[504]: Secondary Entries is updated.
Feb 12 19:44:53.260052 disk-uuid[504]: Secondary Header is updated.
Feb 12 19:44:53.276835 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:44:53.286232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:44:53.301285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:44:54.290593 disk-uuid[505]: The operation has completed successfully.
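
A note on the GPT warnings above: they are expected on a first boot from a resized image. The backup GPT header belongs on the disk's last LBA, but the image was built for a smaller disk, so the header still points at the old location; "GPT:9289727 != 125829119" is exactly that mismatch, and disk-uuid then rewrites the headers (fixing it by hand would be a job for GNU Parted or sgdisk, as the kernel suggests). The arithmetic, worked through with the values from the log:

    SECTOR = 512
    disk_sectors = 125_829_120        # virtio_blk: "[vda] 125829120 512-byte logical blocks"
    expected_alt = disk_sectors - 1   # backup GPT header belongs on the last LBA
    found_alt = 9_289_727             # where the image's backup header actually points

    assert expected_alt == 125_829_119           # the kernel's right-hand number
    image_bytes = (found_alt + 1) * SECTOR       # size of the disk the image was built for
    print(image_bytes / 2**30)                   # ~4.43 GiB
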
Feb 12 19:44:54.291454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:44:54.344725 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:44:54.345632 systemd[1]: Finished disk-uuid.service.
Feb 12 19:44:54.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.348093 systemd[1]: Starting verity-setup.service...
Feb 12 19:44:54.370812 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 19:44:54.458437 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:44:54.460659 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:44:54.462360 systemd[1]: Finished verity-setup.service.
Feb 12 19:44:54.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.556824 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:44:54.557286 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:44:54.558022 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:44:54.559330 systemd[1]: Starting ignition-setup.service...
Feb 12 19:44:54.560761 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:44:54.576840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:44:54.576923 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:44:54.576943 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:44:54.597162 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:44:54.611177 systemd[1]: Finished ignition-setup.service.
Feb 12 19:44:54.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.612812 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:44:54.723037 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:44:54.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.724000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:44:54.726020 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:44:54.755599 systemd-networkd[689]: lo: Link UP
Feb 12 19:44:54.755617 systemd-networkd[689]: lo: Gained carrier
Feb 12 19:44:54.756314 systemd-networkd[689]: Enumeration completed
Feb 12 19:44:54.756818 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:44:54.758085 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 12 19:44:54.759244 systemd-networkd[689]: eth1: Link UP
Feb 12 19:44:54.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.759248 systemd-networkd[689]: eth1: Gained carrier
Feb 12 19:44:54.760022 systemd[1]: Started systemd-networkd.service.
Feb 12 19:44:54.762046 systemd[1]: Reached target network.target.
Feb 12 19:44:54.763267 systemd-networkd[689]: eth0: Link UP
Feb 12 19:44:54.763274 systemd-networkd[689]: eth0: Gained carrier
Feb 12 19:44:54.768174 systemd[1]: Starting iscsiuio.service...
Feb 12 19:44:54.781463 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253
Feb 12 19:44:54.787999 systemd-networkd[689]: eth0: DHCPv4 address 164.90.146.133/20, gateway 164.90.144.1 acquired from 169.254.169.253
Feb 12 19:44:54.790608 systemd[1]: Started iscsiuio.service.
Feb 12 19:44:54.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.792520 systemd[1]: Starting iscsid.service...
Feb 12 19:44:54.800860 iscsid[694]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:44:54.800860 iscsid[694]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:44:54.800860 iscsid[694]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:44:54.800860 iscsid[694]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:44:54.800860 iscsid[694]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:44:54.800860 iscsid[694]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:44:54.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.800104 systemd[1]: Started iscsid.service.
Feb 12 19:44:54.806048 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:44:54.828972 ignition[610]: Ignition 2.14.0
Feb 12 19:44:54.828992 ignition[610]: Stage: fetch-offline
Feb 12 19:44:54.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.835479 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:44:54.829111 ignition[610]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:54.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.837297 systemd[1]: Starting ignition-fetch.service...
Feb 12 19:44:54.829155 ignition[610]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:54.842212 systemd[1]: Finished dracut-initqueue.service.
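
Each Ignition stage above logs the SHA512 of the config it parsed; the 865c03... digest recurs because every stage re-reads the same base.ign. Recomputing the digest from a copy of the file is a quick way to confirm which config a boot actually used; a minimal sketch assuming a locally saved base.ign (the local path is an assumption):

    import hashlib
    from pathlib import Path

    # On a Flatcar host the file lives in /usr/lib/ignition/base.d/base.ign.
    digest = hashlib.sha512(Path("base.ign").read_bytes()).hexdigest()
    print(digest)  # should match the "parsing config with SHA512: ..." lines
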
Feb 12 19:44:54.833941 ignition[610]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:54.842754 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:44:54.834092 ignition[610]: parsed url from cmdline: ""
Feb 12 19:44:54.843130 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:44:54.834096 ignition[610]: no config URL provided
Feb 12 19:44:54.843475 systemd[1]: Reached target remote-fs.target.
Feb 12 19:44:54.834102 ignition[610]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:44:54.848070 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:44:54.834112 ignition[610]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:44:54.834118 ignition[610]: failed to fetch config: resource requires networking
Feb 12 19:44:54.834232 ignition[610]: Ignition finished successfully
Feb 12 19:44:54.856513 ignition[703]: Ignition 2.14.0
Feb 12 19:44:54.856522 ignition[703]: Stage: fetch
Feb 12 19:44:54.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.862040 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:44:54.856767 ignition[703]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:54.856798 ignition[703]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:54.860195 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:54.860333 ignition[703]: parsed url from cmdline: ""
Feb 12 19:44:54.860337 ignition[703]: no config URL provided
Feb 12 19:44:54.860343 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:44:54.860352 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:44:54.860385 ignition[703]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 12 19:44:54.881376 ignition[703]: GET result: OK
Feb 12 19:44:54.881534 ignition[703]: parsing config with SHA512: 7e87f20445b2a1c458eaf1b00f52cfa09573d7eb1489c65f4cfeb281493814e8c5eda78156b5032ce1d0fd7f1adfdf3bbc2c4c174ff814520dfae3950c11f479
Feb 12 19:44:54.943974 unknown[703]: fetched base config from "system"
Feb 12 19:44:54.944004 unknown[703]: fetched base config from "system"
Feb 12 19:44:54.944015 unknown[703]: fetched user config from "digitalocean"
Feb 12 19:44:54.945341 ignition[703]: fetch: fetch complete
Feb 12 19:44:54.945352 ignition[703]: fetch: fetch passed
Feb 12 19:44:54.945439 ignition[703]: Ignition finished successfully
Feb 12 19:44:54.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.949214 systemd[1]: Finished ignition-fetch.service.
Feb 12 19:44:54.950739 systemd[1]: Starting ignition-kargs.service...
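
The fetch stage above pulls user data from DigitalOcean's link-local metadata service, retrying until "GET result: OK". The same endpoint can be queried by hand, but only from inside the droplet, since 169.254.169.254 is link-local; a standard-library sketch:

    import urllib.request

    URL = "http://169.254.169.254/metadata/v1/user-data"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())
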
Feb 12 19:44:54.963973 ignition[713]: Ignition 2.14.0
Feb 12 19:44:54.964832 ignition[713]: Stage: kargs
Feb 12 19:44:54.965443 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:54.966062 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:54.968341 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:54.973467 ignition[713]: kargs: kargs passed
Feb 12 19:44:54.974180 ignition[713]: Ignition finished successfully
Feb 12 19:44:54.976340 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:44:54.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:54.978425 systemd[1]: Starting ignition-disks.service...
Feb 12 19:44:54.993868 ignition[719]: Ignition 2.14.0
Feb 12 19:44:54.994730 ignition[719]: Stage: disks
Feb 12 19:44:54.995378 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:54.996099 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:54.998721 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:55.002389 ignition[719]: disks: disks passed
Feb 12 19:44:55.003563 ignition[719]: Ignition finished successfully
Feb 12 19:44:55.005480 systemd[1]: Finished ignition-disks.service.
Feb 12 19:44:55.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.006276 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:44:55.007356 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:44:55.009030 systemd[1]: Reached target local-fs.target.
Feb 12 19:44:55.009896 systemd[1]: Reached target sysinit.target.
Feb 12 19:44:55.010694 systemd[1]: Reached target basic.target.
Feb 12 19:44:55.012837 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:44:55.032847 systemd-fsck[727]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:44:55.041723 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:44:55.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.043486 systemd[1]: Mounting sysroot.mount...
Feb 12 19:44:55.056836 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:44:55.057259 systemd[1]: Mounted sysroot.mount.
Feb 12 19:44:55.057852 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:44:55.060059 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:44:55.061473 systemd[1]: Starting flatcar-digitalocean-network.service...
Feb 12 19:44:55.063402 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 12 19:44:55.064304 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:44:55.064351 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:44:55.070492 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:44:55.073967 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:44:55.081545 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:44:55.097324 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:44:55.106591 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:44:55.114123 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:44:55.221679 coreos-metadata[733]: Feb 12 19:44:55.221 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 12 19:44:55.224776 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:44:55.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.226261 systemd[1]: Starting ignition-mount.service...
Feb 12 19:44:55.227619 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:44:55.238879 coreos-metadata[734]: Feb 12 19:44:55.238 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 12 19:44:55.245848 bash[785]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:44:55.253789 coreos-metadata[733]: Feb 12 19:44:55.251 INFO Fetch successful
Feb 12 19:44:55.259239 coreos-metadata[734]: Feb 12 19:44:55.257 INFO Fetch successful
Feb 12 19:44:55.264406 ignition[786]: INFO : Ignition 2.14.0
Feb 12 19:44:55.264406 ignition[786]: INFO : Stage: mount
Feb 12 19:44:55.265535 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:55.265535 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:55.266867 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:55.267956 coreos-metadata[734]: Feb 12 19:44:55.267 INFO wrote hostname ci-3510.3.2-3-61711c62be to /sysroot/etc/hostname
Feb 12 19:44:55.269553 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb 12 19:44:55.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.269660 systemd[1]: Finished flatcar-digitalocean-network.service.
Feb 12 19:44:55.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.270750 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 12 19:44:55.277087 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:44:55.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.281245 ignition[786]: INFO : mount: mount passed
Feb 12 19:44:55.281245 ignition[786]: INFO : Ignition finished successfully
Feb 12 19:44:55.283358 systemd[1]: Finished ignition-mount.service.
Feb 12 19:44:55.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:55.480467 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:44:55.491807 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (795)
Feb 12 19:44:55.506516 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:44:55.506591 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:44:55.506604 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:44:55.511604 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:44:55.513770 systemd[1]: Starting ignition-files.service...
Feb 12 19:44:55.541812 ignition[815]: INFO : Ignition 2.14.0
Feb 12 19:44:55.542799 ignition[815]: INFO : Stage: files
Feb 12 19:44:55.543591 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:55.544255 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:55.548288 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:55.554118 ignition[815]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:44:55.555007 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:44:55.555007 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:44:55.560914 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:44:55.562176 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:44:55.562853 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:44:55.562309 unknown[815]: wrote ssh authorized keys file for user: core
Feb 12 19:44:55.563970 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:44:55.563970 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:55.591662 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:44:55.655082 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:44:55.655959 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:44:55.656740 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 19:44:55.854044 systemd-networkd[689]: eth1: Gained IPv6LL
Feb 12 19:44:56.046138 systemd-networkd[689]: eth0: Gained IPv6LL
Feb 12 19:44:56.113838 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:44:56.301651 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 19:44:56.301651 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:44:56.303614 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:44:56.303614 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:56.696052 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:44:56.808568 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 19:44:56.810080 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:44:56.810080 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:44:56.810080 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:44:56.810080 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:44:56.810080 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:44:56.872746 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:44:57.124242 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 19:44:57.125415 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:44:57.126170 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:44:57.127035 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:44:57.175469 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:44:57.436178 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 19:44:57.437594 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:44:57.438444 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:44:57.439217 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:44:57.486641 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:44:58.096986 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 19:44:58.099049 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:44:58.099049 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:44:58.099049 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:44:58.099049 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:44:58.099049 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:58.517814 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 19:44:58.910639 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:44:58.910639 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(12): [started] processing unit "containerd.service"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(12): [finished] processing unit "containerd.service"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:44:58.913000 ignition[815]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:44:58.930229 ignition[815]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:44:58.981900 kernel: kauditd_printk_skb: 28 callbacks suppressed
Feb 12 19:44:58.981932 kernel: audit: type=1130 audit(1707767098.934:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.981946 kernel: audit: type=1130 audit(1707767098.962:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.981963 kernel: audit: type=1131 audit(1707767098.962:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.981980 kernel: audit: type=1130 audit(1707767098.970:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.982208 ignition[815]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:44:58.982208 ignition[815]: INFO : files: files passed
Feb 12 19:44:58.982208 ignition[815]: INFO : Ignition finished successfully
Feb 12 19:44:58.933083 systemd[1]: Finished ignition-files.service.
Feb 12 19:44:58.937387 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:44:58.986416 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:44:58.955097 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:44:58.956461 systemd[1]: Starting ignition-quench.service...
Feb 12 19:44:58.961594 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:44:58.961729 systemd[1]: Finished ignition-quench.service.
Feb 12 19:44:58.963399 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:44:58.971257 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:44:58.977028 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:44:58.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.999213 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:44:59.007473 kernel: audit: type=1130 audit(1707767098.999:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.007511 kernel: audit: type=1131 audit(1707767098.999:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:58.999364 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:44:59.000095 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:44:59.006993 systemd[1]: Reached target initrd.target.
Feb 12 19:44:59.007893 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:44:59.009289 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:44:59.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.028122 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:44:59.032332 kernel: audit: type=1130 audit(1707767099.028:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.032828 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:44:59.046649 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:44:59.047857 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:44:59.048901 systemd[1]: Stopped target timers.target.
Feb 12 19:44:59.049714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:44:59.050419 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:44:59.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.051580 systemd[1]: Stopped target initrd.target.
Feb 12 19:44:59.054905 kernel: audit: type=1131 audit(1707767099.050:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.055183 systemd[1]: Stopped target basic.target.
Feb 12 19:44:59.056052 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:44:59.057248 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:44:59.058590 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:44:59.059453 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:44:59.060397 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:44:59.061295 systemd[1]: Stopped target sysinit.target.
Feb 12 19:44:59.062163 systemd[1]: Stopped target local-fs.target.
Feb 12 19:44:59.063012 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:44:59.063851 systemd[1]: Stopped target swap.target.
Feb 12 19:44:59.064653 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:44:59.068760 kernel: audit: type=1131 audit(1707767099.064:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.064802 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:44:59.065702 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:44:59.073050 kernel: audit: type=1131 audit(1707767099.069:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.069263 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:44:59.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.069419 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:44:59.070056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:44:59.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.070210 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:44:59.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.073853 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:44:59.074088 systemd[1]: Stopped ignition-files.service.
Feb 12 19:44:59.074955 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 12 19:44:59.075059 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 12 19:44:59.078335 iscsid[694]: iscsid shutting down.
Feb 12 19:44:59.076979 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:44:59.082069 systemd[1]: Stopping iscsid.service...
Feb 12 19:44:59.082547 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:44:59.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.082708 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:44:59.084454 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:44:59.091344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:44:59.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.091547 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:44:59.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.094128 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:44:59.101000 ignition[853]: INFO : Ignition 2.14.0
Feb 12 19:44:59.101000 ignition[853]: INFO : Stage: umount
Feb 12 19:44:59.101000 ignition[853]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:59.101000 ignition[853]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb 12 19:44:59.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.094286 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:44:59.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.113421 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 12 19:44:59.113421 ignition[853]: INFO : umount: umount passed
Feb 12 19:44:59.113421 ignition[853]: INFO : Ignition finished successfully
Feb 12 19:44:59.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.098908 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:44:59.099051 systemd[1]: Stopped iscsid.service.
Feb 12 19:44:59.102762 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:44:59.107519 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:44:59.107662 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:44:59.113299 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:44:59.113405 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:44:59.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.114164 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:44:59.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.114308 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:44:59.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.126075 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:44:59.127086 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:44:59.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.127146 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:44:59.138350 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:44:59.138415 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:44:59.139177 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 19:44:59.139239 systemd[1]: Stopped ignition-fetch.service.
Feb 12 19:44:59.139942 systemd[1]: Stopped target network.target.
Feb 12 19:44:59.140872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:44:59.140987 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:44:59.141880 systemd[1]: Stopped target paths.target.
Feb 12 19:44:59.142544 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:44:59.145892 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:44:59.146587 systemd[1]: Stopped target slices.target.
Feb 12 19:44:59.147326 systemd[1]: Stopped target sockets.target.
Feb 12 19:44:59.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.148060 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:44:59.148113 systemd[1]: Closed iscsid.socket.
Feb 12 19:44:59.149028 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:44:59.149085 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:44:59.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.149728 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:44:59.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.149800 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:44:59.150501 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:44:59.151355 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:44:59.152466 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:44:59.152564 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:44:59.153366 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:44:59.153412 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:44:59.153841 systemd-networkd[689]: eth0: DHCPv6 lease lost
Feb 12 19:44:59.157887 systemd-networkd[689]: eth1: DHCPv6 lease lost
Feb 12 19:44:59.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.159059 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:44:59.159182 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:44:59.162000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:44:59.160369 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:44:59.160411 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:44:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.162150 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:44:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.165076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:44:59.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.165151 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:44:59.166011 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:44:59.166063 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:44:59.166897 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:44:59.166950 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:44:59.171771 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:44:59.173709 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:44:59.174309 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:44:59.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.175576 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:44:59.177295 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:44:59.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.177433 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:44:59.178000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:44:59.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.179634 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:44:59.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.179694 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:44:59.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.180133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:44:59.180189 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:44:59.180579 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:44:59.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.180627 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:44:59.181142 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:44:59.181183 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:44:59.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.181533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:44:59.181567 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:44:59.183312 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:44:59.183786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:44:59.183856 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:44:59.185563 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:44:59.185697 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:44:59.194112 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:44:59.194226 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:44:59.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:59.195506 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:44:59.197840 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:44:59.209000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:44:59.209000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:44:59.208748 systemd[1]: Switching root.
Feb 12 19:44:59.209000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:44:59.209000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:44:59.209000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:44:59.231336 systemd-journald[183]: Journal stopped
Feb 12 19:45:05.528656 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Feb 12 19:45:05.528769 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:45:05.546954 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:45:05.547000 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:45:05.547019 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:45:05.547038 kernel: SELinux: policy capability open_perms=1
Feb 12 19:45:05.547109 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:45:05.547135 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:45:05.547168 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:45:05.547191 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:45:05.547227 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:45:05.547244 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:45:05.547283 systemd[1]: Successfully loaded SELinux policy in 55.101ms.
Feb 12 19:45:05.547314 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.135ms.
Feb 12 19:45:05.547335 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:45:05.547378 systemd[1]: Detected virtualization kvm.
Feb 12 19:45:05.547410 systemd[1]: Detected architecture x86-64.
Feb 12 19:45:05.547440 systemd[1]: Detected first boot.
Feb 12 19:45:05.547459 systemd[1]: Hostname set to .
Feb 12 19:45:05.547497 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:45:05.547520 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:45:05.547542 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:45:05.547572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:45:05.547594 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:45:05.547619 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:45:05.547644 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:45:05.547668 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:45:05.547693 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:45:05.547712 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:45:05.547730 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 19:45:05.547758 systemd[1]: Created slice system-getty.slice.
Feb 12 19:45:05.547801 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:45:05.547819 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:45:05.547885 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:45:05.547905 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:45:05.547923 systemd[1]: Created slice user.slice.
Feb 12 19:45:05.547940 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:45:05.547958 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:45:05.547978 systemd[1]: Set up automount boot.automount.
Feb 12 19:45:05.548009 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:45:05.548032 systemd[1]: Reached target integritysetup.target.
Feb 12 19:45:05.548049 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:45:05.548184 systemd[1]: Reached target remote-fs.target.
Feb 12 19:45:05.548211 systemd[1]: Reached target slices.target.
Feb 12 19:45:05.548231 systemd[1]: Reached target swap.target.
Feb 12 19:45:05.548250 systemd[1]: Reached target torcx.target.
Feb 12 19:45:05.548281 systemd[1]: Reached target veritysetup.target.
Feb 12 19:45:05.548299 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:45:05.548318 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:45:05.603646 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:45:05.603852 kernel: kauditd_printk_skb: 49 callbacks suppressed
Feb 12 19:45:05.603885 kernel: audit: type=1400 audit(1707767105.043:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:45:05.603908 kernel: audit: type=1335 audit(1707767105.043:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 19:45:05.603929 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:45:05.603965 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:45:05.603979 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:45:05.604034 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:45:05.604047 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:45:05.604391 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:45:05.604445 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:45:05.604465 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:45:05.604486 systemd[1]: Mounting media.mount...
Feb 12 19:45:05.604506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:45:05.604526 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:45:05.604560 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:45:05.604579 systemd[1]: Mounting tmp.mount...
Feb 12 19:45:05.604598 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:45:05.604618 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:45:05.604636 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:45:05.604657 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:45:05.604679 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:45:05.604698 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:45:05.604716 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:45:05.604751 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:45:05.604813 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:45:05.604845 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:45:05.604866 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 19:45:05.604886 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 19:45:05.604912 systemd[1]: Starting systemd-journald.service...
Feb 12 19:45:05.605018 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:45:05.605049 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:45:05.605068 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:45:05.605088 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:45:05.605107 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:45:05.605127 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:45:05.605147 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:45:05.605165 systemd[1]: Mounted media.mount.
Feb 12 19:45:05.605200 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:45:05.605221 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:45:05.605242 systemd[1]: Mounted tmp.mount.
Feb 12 19:45:05.605264 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:45:05.605286 kernel: audit: type=1130 audit(1707767105.355:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605309 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:45:05.605329 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:45:05.605349 kernel: audit: type=1130 audit(1707767105.363:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605367 kernel: audit: type=1131 audit(1707767105.369:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:45:05.605417 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:45:05.605436 kernel: audit: type=1130 audit(1707767105.384:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605457 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:45:05.605476 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:45:05.605498 kernel: audit: type=1131 audit(1707767105.390:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:45:05.605542 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:45:05.605572 kernel: audit: type=1130 audit(1707767105.399:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605593 kernel: audit: type=1131 audit(1707767105.399:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605614 kernel: audit: type=1130 audit(1707767105.415:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:45:05.605632 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:45:05.605651 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:45:05.605671 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:45:05.605724 systemd[1]: Reached target network-pre.target.
Feb 12 19:45:05.605745 kernel: loop: module loaded
Feb 12 19:45:05.605766 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:45:05.605844 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:45:05.605868 kernel: fuse: init (API version 7.34)
Feb 12 19:45:05.605889 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:45:05.605921 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:45:05.605941 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:45:05.605960 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:45:05.605986 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:45:05.606032 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:45:05.606052 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:45:05.606072 systemd[1]: Finished modprobe@loop.service. Feb 12 19:45:05.606097 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:45:05.606118 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:45:05.606138 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:45:05.606170 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:45:05.606206 systemd-journald[984]: Journal started Feb 12 19:45:05.606313 systemd-journald[984]: Runtime Journal (/run/log/journal/85aaea18ba2544f79d384ce5be4cc7d3) is 4.9M, max 39.5M, 34.5M free. Feb 12 19:45:05.043000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:45:05.625670 systemd[1]: Started systemd-journald.service. Feb 12 19:45:05.043000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:45:05.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:45:05.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.525000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:45:05.525000 audit[984]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdb4d8f4d0 a2=4000 a3=7ffdb4d8f56c items=0 ppid=1 pid=984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:05.525000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:45:05.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.624288 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:45:05.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.633705 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:45:05.634527 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:45:05.662006 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:45:05.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:45:05.673969 systemd-journald[984]: Time spent on flushing to /var/log/journal/85aaea18ba2544f79d384ce5be4cc7d3 is 93.210ms for 1138 entries. Feb 12 19:45:05.673969 systemd-journald[984]: System Journal (/var/log/journal/85aaea18ba2544f79d384ce5be4cc7d3) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:45:05.782042 systemd-journald[984]: Received client request to flush runtime journal. Feb 12 19:45:05.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.752210 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:45:05.756496 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:45:05.787338 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:45:05.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.790185 udevadm[1036]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:45:05.795949 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:45:05.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.799266 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:45:05.919880 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:45:05.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:05.923090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:45:06.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:06.097691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:45:07.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.170540 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:45:07.173407 systemd[1]: Starting systemd-udevd.service... Feb 12 19:45:07.226532 systemd-udevd[1050]: Using default interface naming scheme 'v252'. Feb 12 19:45:07.325133 systemd[1]: Started systemd-udevd.service. Feb 12 19:45:07.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.336766 systemd[1]: Starting systemd-networkd.service... Feb 12 19:45:07.355911 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:45:07.424102 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:45:07.433077 systemd[1]: Started systemd-userdbd.service. 
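The journald lines above record the hand-off from the volatile runtime journal in /run/log/journal to the persistent system journal in /var/log/journal: systemd-journal-flush.service asks the daemon to migrate the early-boot entries once /var is writable, which is the "Received client request to flush runtime journal" message. Upstream, that unit amounts to little more than:

    journalctl --flush

The size ceilings printed above (39.5M runtime, 195.6M system) are journald's computed defaults; they can be pinned explicitly in journald.conf. A minimal sketch, with values chosen only to echo the figures in this log rather than read from the host:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=39M
    SystemMaxUse=195M
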
Feb 12 19:45:07.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.487447 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:45:07.489333 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:45:07.491500 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:45:07.494720 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:45:07.498435 systemd[1]: Starting modprobe@loop.service... Feb 12 19:45:07.499970 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:45:07.500131 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:45:07.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.500288 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:45:07.501118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:45:07.501464 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:45:07.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.519087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:45:07.519357 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:45:07.520196 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:45:07.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.521548 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:45:07.521833 systemd[1]: Finished modprobe@loop.service. Feb 12 19:45:07.522872 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:45:07.571245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
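The paired SERVICE_START/SERVICE_STOP audit records for modprobe@dm_mod, modprobe@efi_pstore and modprobe@loop are normal: modprobe@.service is a oneshot template unit that loads the kernel module named by its instance specifier and then immediately goes inactive. A sketch of the template's shape (paraphrased from memory of the stock systemd unit, not copied out of this image):

    # modprobe@.service (sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

Because Type=oneshot is used without RemainAfterExit=yes, each instance deactivates as soon as modprobe returns, which is why every "Finished modprobe@..." line in this log is shadowed by a SERVICE_STOP record.
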
Feb 12 19:45:07.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:07.620277 systemd-networkd[1067]: lo: Link UP Feb 12 19:45:07.620295 systemd-networkd[1067]: lo: Gained carrier Feb 12 19:45:07.621693 systemd-networkd[1067]: Enumeration completed Feb 12 19:45:07.622056 systemd[1]: Started systemd-networkd.service. Feb 12 19:45:07.622274 systemd-networkd[1067]: eth1: Configuring with /run/systemd/network/10-02:1a:e3:47:40:2e.network. Feb 12 19:45:07.627757 systemd-networkd[1067]: eth0: Configuring with /run/systemd/network/10-ee:86:c2:47:77:4d.network. Feb 12 19:45:07.629410 systemd-networkd[1067]: eth1: Link UP Feb 12 19:45:07.629418 systemd-networkd[1067]: eth1: Gained carrier Feb 12 19:45:07.639389 systemd-networkd[1067]: eth0: Link UP Feb 12 19:45:07.639403 systemd-networkd[1067]: eth0: Gained carrier Feb 12 19:45:07.680827 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:45:07.697851 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:45:07.656000 audit[1059]: AVC avc: denied { confidentiality } for pid=1059 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:45:07.656000 audit[1059]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bf1fc2abb0 a1=32194 a2=7f7e642e0bc5 a3=5 items=108 ppid=1050 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:07.656000 audit: CWD cwd="/" Feb 12 19:45:07.656000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=1 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=2 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=3 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=4 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=5 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=6 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=7 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:45:07.656000 audit: PATH item=8 through item=105: ninety-eight further PATH records from the same (udev-worker) SYSCALL event, a mix of nametype=PARENT and nametype=CREATE entries for tracefs directories (mode=040750) and files (mode=0100640 or 0100440), all dev=00:0b, ouid=0 ogid=0, obj=system_u:object_r:tracefs_t:s0, cap_* fields zero
Feb 12 19:45:07.656000 audit: PATH item=106 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PATH item=107 name=(null) inode=14617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:45:07.656000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:45:07.736567 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 19:45:07.811897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:45:07.821816 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:45:08.016998 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:45:08.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.050605 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:45:08.065536 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:45:08.106634 lvm[1094]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:45:08.144520 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:45:08.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.145726 systemd[1]: Reached target cryptsetup.target. Feb 12 19:45:08.149209 systemd[1]: Starting lvm2-activation.service... Feb 12 19:45:08.159208 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:45:08.196418 systemd[1]: Finished lvm2-activation.service. Feb 12 19:45:08.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.197255 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:45:08.200868 systemd[1]: Mounting media-configdrive.mount... Feb 12 19:45:08.201621 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:45:08.201709 systemd[1]: Reached target machines.target. Feb 12 19:45:08.204801 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:45:08.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.231835 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:45:08.248879 kernel: ISO 9660 Extensions: RRIP_1991A Feb 12 19:45:08.249412 systemd[1]: Mounted media-configdrive.mount. Feb 12 19:45:08.250309 systemd[1]: Reached target local-fs.target. Feb 12 19:45:08.254880 systemd[1]: Starting ldconfig.service... Feb 12 19:45:08.258191 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
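networkd matched eth0 and eth1 by MAC address against the generated profiles /run/systemd/network/10-ee:86:c2:47:77:4d.network and 10-02:1a:e3:47:40:2e.network seen above. Their actual contents are not visible in this log (on a DigitalOcean droplet they are rendered from the config-drive metadata), but a file of that shape generally looks like the following sketch, with placeholder addressing:

    [Match]
    MACAddress=ee:86:c2:47:77:4d

    [Network]
    Address=203.0.113.10/24   # placeholder; the real address comes from droplet metadata
    Gateway=203.0.113.1       # placeholder
    DNS=203.0.113.53          # placeholder

Once a matching .network file is found, the "Link UP" / "Gained carrier" transitions follow from networkd bringing the interface administratively up and waiting for carrier.
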
Feb 12 19:45:08.258632 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:45:08.265915 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:45:08.272019 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:45:08.277033 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:45:08.277150 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:45:08.285086 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:45:08.294137 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Feb 12 19:45:08.296966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:45:08.325061 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:45:08.337392 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:45:08.346342 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:45:08.430696 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:45:08.433705 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:45:08.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.579113 systemd-fsck[1112]: fsck.fat 4.2 (2021-01-31) Feb 12 19:45:08.579113 systemd-fsck[1112]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 19:45:08.587062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:45:08.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.591533 systemd[1]: Mounting boot.mount... Feb 12 19:45:08.639200 systemd[1]: Mounted boot.mount. Feb 12 19:45:08.692290 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:45:08.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.846441 systemd-networkd[1067]: eth0: Gained IPv6LL Feb 12 19:45:08.875465 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:45:08.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.879309 systemd[1]: Starting audit-rules.service... Feb 12 19:45:08.882685 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:45:08.893488 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:45:08.906868 systemd[1]: Starting systemd-resolved.service... Feb 12 19:45:08.911051 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:45:08.929582 systemd[1]: Starting systemd-update-utmp.service... 
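The three "Duplicate line" warnings from systemd-tmpfiles above are benign: several tmpfiles.d fragments declare an entry for the same path (/run/lock, /root, /var/lib/systemd), and the first declaration read wins while later duplicates are ignored with exactly this message. Each line follows the tmpfiles.d format of type, path, mode, user, group, age, argument; the colliding /run/lock entry is plausibly of this shape (a reconstruction, not copied from this image):

    # /usr/lib/tmpfiles.d/legacy.conf
    d /run/lock 0755 root root -
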
Feb 12 19:45:08.931540 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:45:08.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.933574 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:45:08.955000 audit[1127]: SYSTEM_BOOT pid=1127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:08.960386 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:45:09.028200 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:45:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:45:09.106000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:45:09.106000 audit[1143]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd02c899b0 a2=420 a3=0 items=0 ppid=1120 pid=1143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:45:09.106000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:45:09.108288 augenrules[1143]: No rules Feb 12 19:45:09.109398 systemd[1]: Finished audit-rules.service. Feb 12 19:45:09.161967 systemd-resolved[1124]: Positive Trust Anchors: Feb 12 19:45:09.162530 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:45:09.162931 systemd-resolved[1124]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:45:09.163081 systemd-resolved[1124]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:45:09.175472 systemd-resolved[1124]: Using system hostname 'ci-3510.3.2-3-61711c62be'. Feb 12 19:45:09.179829 systemd[1]: Started systemd-resolved.service. Feb 12 19:45:09.180518 systemd[1]: Reached target network.target. Feb 12 19:45:09.181327 systemd[1]: Reached target nss-lookup.target. Feb 12 19:45:09.183123 systemd[1]: Finished ldconfig.service. Feb 12 19:45:09.186436 systemd[1]: Starting systemd-update-done.service... Feb 12 19:45:09.206252 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:45:09.207013 systemd[1]: Reached target time-set.target. 
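The "Positive Trust Anchors" record above is the DNSSEC root trust anchor: the DS record for the root zone's KSK-2017 (key tag 20326, algorithm 8, digest type 2). The negative trust anchors that follow are private-use and reverse-lookup zones for which DNSSEC validation is deliberately skipped, since they can never validate against the public root. Both sets can be extended with drop-ins; a hedged sketch of the mechanism (file layout per dnssec-trust-anchors.d(5), the domain name is a made-up example):

    # /etc/dnssec-trust-anchors.d/corp.negative
    intranet.corp.example

    # /etc/systemd/resolved.conf
    [Resolve]
    DNSSEC=allow-downgrade
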
Feb 12 19:45:09.209867 systemd[1]: Finished systemd-update-done.service. Feb 12 19:45:09.210483 systemd[1]: Reached target sysinit.target. Feb 12 19:45:09.211500 systemd[1]: Started motdgen.path. Feb 12 19:45:09.212563 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:45:09.216495 systemd[1]: Started logrotate.timer. Feb 12 19:45:09.217299 systemd[1]: Started mdadm.timer. Feb 12 19:45:09.217758 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:45:09.218305 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:45:09.218363 systemd[1]: Reached target paths.target. Feb 12 19:45:09.218863 systemd[1]: Reached target timers.target. Feb 12 19:45:09.219870 systemd[1]: Listening on dbus.socket. Feb 12 19:45:09.223005 systemd[1]: Starting docker.socket... Feb 12 19:45:09.227122 systemd[1]: Listening on sshd.socket. Feb 12 19:45:09.227917 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:45:09.229250 systemd[1]: Listening on docker.socket. Feb 12 19:45:09.233079 systemd[1]: Reached target sockets.target. Feb 12 19:45:09.238500 systemd[1]: Reached target basic.target. Feb 12 19:45:10.004438 systemd-resolved[1124]: Clock change detected. Flushing caches. Feb 12 19:45:10.004525 systemd-timesyncd[1125]: Contacted time server 168.235.86.33:123 (0.flatcar.pool.ntp.org). Feb 12 19:45:10.004614 systemd-timesyncd[1125]: Initial clock synchronization to Mon 2024-02-12 19:45:10.004302 UTC. Feb 12 19:45:10.005614 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:45:10.005718 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:45:10.005756 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:45:10.010282 systemd[1]: Starting containerd.service... Feb 12 19:45:10.015976 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 19:45:10.023120 systemd[1]: Starting dbus.service... Feb 12 19:45:10.028477 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:45:10.034105 systemd[1]: Starting extend-filesystems.service... Feb 12 19:45:10.034840 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:45:10.039323 systemd[1]: Starting motdgen.service... Feb 12 19:45:10.043796 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:45:10.139747 jq[1161]: false Feb 12 19:45:10.050740 systemd[1]: Starting prepare-critools.service... Feb 12 19:45:10.056357 systemd[1]: Starting prepare-helm.service... Feb 12 19:45:10.059806 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:45:10.066889 systemd[1]: Starting sshd-keygen.service... Feb 12 19:45:10.075655 systemd[1]: Starting systemd-logind.service... Feb 12 19:45:10.076272 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:45:10.076376 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:45:10.078697 systemd[1]: Starting update-engine.service... 
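The apparent jump in timestamps here is explained by the timesyncd lines above: the daemon reached 0.flatcar.pool.ntp.org (168.235.86.33) and stepped the system clock, resolved noticed the change and flushed its caches, and everything logged afterwards is on the corrected timebase. The "System is tainted: cgroupsv1" note is separate: this image still boots with the legacy cgroup hierarchy, which matters again below in the containerd configuration. Time sources are configurable in timesyncd.conf; a sketch in which the first pool entry is taken from this log and the remaining entries are assumptions:

    # /etc/systemd/timesyncd.conf
    [Time]
    FallbackNTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org 2.flatcar.pool.ntp.org
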
Feb 12 19:45:10.084289 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:45:10.206677 tar[1180]: ./ Feb 12 19:45:10.206677 tar[1180]: ./macvlan Feb 12 19:45:10.100830 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:45:10.105729 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:45:10.220290 jq[1176]: true Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda1 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda2 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda3 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found usr Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda4 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda6 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda7 Feb 12 19:45:10.221960 extend-filesystems[1162]: Found vda9 Feb 12 19:45:10.168522 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:45:10.258743 tar[1187]: crictl Feb 12 19:45:10.269622 extend-filesystems[1162]: Checking size of /dev/vda9 Feb 12 19:45:10.168869 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:45:10.273031 jq[1185]: true Feb 12 19:45:10.187330 systemd-networkd[1067]: eth1: Gained IPv6LL Feb 12 19:45:10.270677 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:45:10.275880 tar[1183]: linux-amd64/helm Feb 12 19:45:10.271021 systemd[1]: Finished motdgen.service. Feb 12 19:45:10.278720 dbus-daemon[1159]: [system] SELinux support is enabled Feb 12 19:45:10.279064 systemd[1]: Started dbus.service. Feb 12 19:45:10.283178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:45:10.283309 systemd[1]: Reached target system-config.target. Feb 12 19:45:10.284012 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:45:10.290644 systemd[1]: Starting user-configdrive.service... Feb 12 19:45:10.340189 update_engine[1174]: I0212 19:45:10.339519 1174 main.cc:92] Flatcar Update Engine starting Feb 12 19:45:10.377305 update_engine[1174]: I0212 19:45:10.377208 1174 update_check_scheduler.cc:74] Next update check in 2m49s Feb 12 19:45:10.377612 systemd[1]: Started update-engine.service. Feb 12 19:45:10.387738 extend-filesystems[1162]: Resized partition /dev/vda9 Feb 12 19:45:10.383242 systemd[1]: Started locksmithd.service. Feb 12 19:45:10.391990 bash[1219]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:45:10.390887 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
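extend-filesystems has scanned the disk (the "Found vda*" lines) and is checking whether the root filesystem fills /dev/vda9; the resize2fs run that follows grows it on-line from 553472 to 15121403 4k blocks, i.e. from the roughly 2G shipped image size to the droplet's full disk. The equivalent manual operation is a single command, safe on a mounted ext4 filesystem (a sketch of what the unit effectively does, not its literal ExecStart):

    resize2fs /dev/vda9    # on-line grow of the mounted root to fill its partition
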
Feb 12 19:45:10.406442 extend-filesystems[1228]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:45:10.414242 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 12 19:45:10.550712 coreos-cloudinit[1209]: 2024/02/12 19:45:10 Checking availability of "cloud-drive" Feb 12 19:45:10.550712 coreos-cloudinit[1209]: 2024/02/12 19:45:10 Fetching user-data from datasource of type "cloud-drive" Feb 12 19:45:10.550712 coreos-cloudinit[1209]: 2024/02/12 19:45:10 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 12 19:45:10.550712 coreos-cloudinit[1209]: 2024/02/12 19:45:10 Fetching meta-data from datasource of type "cloud-drive" Feb 12 19:45:10.550712 coreos-cloudinit[1209]: 2024/02/12 19:45:10 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 12 19:45:10.577254 coreos-cloudinit[1209]: Detected an Ignition config. Exiting... Feb 12 19:45:10.578084 systemd[1]: Finished user-configdrive.service. Feb 12 19:45:10.578890 systemd[1]: Reached target user-config.target. Feb 12 19:45:10.592074 env[1189]: time="2024-02-12T19:45:10.591987174Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:45:10.604751 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 12 19:45:10.647711 extend-filesystems[1228]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:45:10.647711 extend-filesystems[1228]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 12 19:45:10.647711 extend-filesystems[1228]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 12 19:45:10.659406 extend-filesystems[1162]: Resized filesystem in /dev/vda9 Feb 12 19:45:10.659406 extend-filesystems[1162]: Found vdb Feb 12 19:45:10.654086 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:45:10.654898 systemd[1]: Finished extend-filesystems.service. Feb 12 19:45:10.676967 systemd-logind[1172]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:45:10.677004 systemd-logind[1172]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:45:10.678579 systemd-logind[1172]: New seat seat0. Feb 12 19:45:10.692960 systemd[1]: Started systemd-logind.service. Feb 12 19:45:10.703871 tar[1180]: ./static Feb 12 19:45:10.788619 env[1189]: time="2024-02-12T19:45:10.788531838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:45:10.789190 env[1189]: time="2024-02-12T19:45:10.789143777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.796642 env[1189]: time="2024-02-12T19:45:10.796572038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:45:10.799825 coreos-metadata[1157]: Feb 12 19:45:10.799 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:45:10.803383 env[1189]: time="2024-02-12T19:45:10.803263727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.805969 env[1189]: time="2024-02-12T19:45:10.805903684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:45:10.806377 env[1189]: time="2024-02-12T19:45:10.806340352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.806538 env[1189]: time="2024-02-12T19:45:10.806511753Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:45:10.806634 env[1189]: time="2024-02-12T19:45:10.806612012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.806913 env[1189]: time="2024-02-12T19:45:10.806881814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.807573 env[1189]: time="2024-02-12T19:45:10.807535285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:45:10.808874 env[1189]: time="2024-02-12T19:45:10.808827501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:45:10.811340 env[1189]: time="2024-02-12T19:45:10.811286420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:45:10.818656 env[1189]: time="2024-02-12T19:45:10.818560897Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:45:10.819074 coreos-metadata[1157]: Feb 12 19:45:10.819 INFO Fetch successful Feb 12 19:45:10.823340 env[1189]: time="2024-02-12T19:45:10.823255988Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:45:10.839567 unknown[1157]: wrote ssh authorized keys file for user: core Feb 12 19:45:10.842455 env[1189]: time="2024-02-12T19:45:10.839404075Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:45:10.842455 env[1189]: time="2024-02-12T19:45:10.842161407Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:45:10.842455 env[1189]: time="2024-02-12T19:45:10.842189676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:45:10.842455 env[1189]: time="2024-02-12T19:45:10.842295972Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842325554Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842884326Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842923175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842951032Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842975323Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.842996749Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.843015593Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.843257 env[1189]: time="2024-02-12T19:45:10.843066910Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:45:10.844967 env[1189]: time="2024-02-12T19:45:10.844035973Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:45:10.844967 env[1189]: time="2024-02-12T19:45:10.844263443Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:45:10.844967 env[1189]: time="2024-02-12T19:45:10.844786754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:45:10.844967 env[1189]: time="2024-02-12T19:45:10.844836572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.844967 env[1189]: time="2024-02-12T19:45:10.844859436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848748326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848815424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848839941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848880636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848903563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848926431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848946759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848966624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.848991577Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.849344353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.849382110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.849403304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.849425806Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:45:10.855299 env[1189]: time="2024-02-12T19:45:10.849451527Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:45:10.856354 env[1189]: time="2024-02-12T19:45:10.849471725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:45:10.856354 env[1189]: time="2024-02-12T19:45:10.849507811Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:45:10.856354 env[1189]: time="2024-02-12T19:45:10.849561249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:45:10.858166 env[1189]: time="2024-02-12T19:45:10.849846805Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:45:10.858166 env[1189]: time="2024-02-12T19:45:10.849936382Z" level=info msg="Connect containerd service" Feb 12 19:45:10.858166 env[1189]: time="2024-02-12T19:45:10.850010354Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" 
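The CRI config dump above fixes the endpoints this boot will use: containerd serving on /run/containerd/containerd.sock with the overlayfs snapshotter and runc as the default runtime. A minimal Go sketch, assuming the github.com/containerd/containerd client module is available (not part of this host's image), that dials that socket and prints the daemon version:

package main

import (
	"context"
	"fmt"
	"log"

	// assumed dependency: go get github.com/containerd/containerd
	"github.com/containerd/containerd"
)

func main() {
	// Socket path taken from the ContainerdEndpoint value logged above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}

Run as root, since the socket is root-owned; on this host it should report the 1.6.16 daemon that the kubelet logs identify later in this boot.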
Feb 12 19:45:10.863817 env[1189]: time="2024-02-12T19:45:10.860053094Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:45:10.863817 env[1189]: time="2024-02-12T19:45:10.861717096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:45:10.863817 env[1189]: time="2024-02-12T19:45:10.861812420Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:45:10.862128 systemd[1]: Started containerd.service. Feb 12 19:45:10.877564 env[1189]: time="2024-02-12T19:45:10.876654830Z" level=info msg="containerd successfully booted in 0.304137s" Feb 12 19:45:10.883509 tar[1180]: ./vlan Feb 12 19:45:10.888309 update-ssh-keys[1242]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:45:10.889507 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 19:45:10.907304 env[1189]: time="2024-02-12T19:45:10.907136906Z" level=info msg="Start subscribing containerd event" Feb 12 19:45:10.909410 env[1189]: time="2024-02-12T19:45:10.909356825Z" level=info msg="Start recovering state" Feb 12 19:45:10.917636 env[1189]: time="2024-02-12T19:45:10.917588653Z" level=info msg="Start event monitor" Feb 12 19:45:10.917880 env[1189]: time="2024-02-12T19:45:10.917852171Z" level=info msg="Start snapshots syncer" Feb 12 19:45:10.917986 env[1189]: time="2024-02-12T19:45:10.917966364Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:45:10.918102 env[1189]: time="2024-02-12T19:45:10.918067146Z" level=info msg="Start streaming server" Feb 12 19:45:11.010748 tar[1180]: ./portmap Feb 12 19:45:11.123925 tar[1180]: ./host-local Feb 12 19:45:11.212971 tar[1180]: ./vrf Feb 12 19:45:11.290582 tar[1180]: ./bridge Feb 12 19:45:11.434227 tar[1180]: ./tuning Feb 12 19:45:11.540101 tar[1180]: ./firewall Feb 12 19:45:11.680188 tar[1180]: ./host-device Feb 12 19:45:11.857220 tar[1180]: ./sbr Feb 12 19:45:11.951008 tar[1180]: ./loopback Feb 12 19:45:12.047185 tar[1180]: ./dhcp Feb 12 19:45:12.191134 systemd[1]: Finished prepare-critools.service. Feb 12 19:45:12.229253 tar[1183]: linux-amd64/LICENSE Feb 12 19:45:12.232353 tar[1183]: linux-amd64/README.md Feb 12 19:45:12.241715 systemd[1]: Finished prepare-helm.service. Feb 12 19:45:12.274888 tar[1180]: ./ptp Feb 12 19:45:12.326617 tar[1180]: ./ipvlan Feb 12 19:45:12.372529 tar[1180]: ./bandwidth Feb 12 19:45:12.447142 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:45:12.554777 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:45:13.430846 sshd_keygen[1201]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:45:13.481625 systemd[1]: Finished sshd-keygen.service. Feb 12 19:45:13.485962 systemd[1]: Starting issuegen.service... Feb 12 19:45:13.498781 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:45:13.499233 systemd[1]: Finished issuegen.service. Feb 12 19:45:13.502185 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:45:13.516972 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:45:13.520868 systemd[1]: Started getty@tty1.service. Feb 12 19:45:13.524424 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:45:13.525720 systemd[1]: Reached target getty.target. Feb 12 19:45:13.526489 systemd[1]: Reached target multi-user.target. 
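The "no network config found in /etc/cni/net.d" error above is expected this early in boot: the directory stays empty until a network add-on drops a config in, even though prepare-cni-plugins.service is unpacking the plugin binaries (./bridge, ./host-local, and the rest) at the same time. A hedged sketch of what such a config looks like, written from Go using the bridge and host-local plugins extracted above; the network name, subnet, and file name are illustrative assumptions, not values from this host:

package main

import (
	"log"
	"os"
)

// Minimal CNI conflist wiring the bridge plugin to host-local IPAM.
// "examplenet" and 10.85.0.0/16 are placeholders, not from this droplet.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
`

func main() {
	// /etc/cni/net.d is the NetworkPluginConfDir from the CRI config above.
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	// CRI loads the lexically first file, so a low numeric prefix wins.
	if err := os.WriteFile("/etc/cni/net.d/10-examplenet.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}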
Feb 12 19:45:13.530153 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:45:13.542069 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:45:13.542653 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:45:13.553670 systemd[1]: Startup finished in 8.613s (kernel) + 13.410s (userspace) = 22.024s. Feb 12 19:45:19.804728 systemd[1]: Created slice system-sshd.slice. Feb 12 19:45:19.806543 systemd[1]: Started sshd@0-164.90.146.133:22-139.178.68.195:41890.service. Feb 12 19:45:19.877140 sshd[1282]: Accepted publickey for core from 139.178.68.195 port 41890 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:45:19.880323 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:19.894009 systemd[1]: Created slice user-500.slice. Feb 12 19:45:19.895915 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:45:19.904310 systemd-logind[1172]: New session 1 of user core. Feb 12 19:45:19.909927 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:45:19.912724 systemd[1]: Starting user@500.service... Feb 12 19:45:19.923297 (systemd)[1287]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:20.028785 systemd[1287]: Queued start job for default target default.target. Feb 12 19:45:20.029633 systemd[1287]: Reached target paths.target. Feb 12 19:45:20.029677 systemd[1287]: Reached target sockets.target. Feb 12 19:45:20.029698 systemd[1287]: Reached target timers.target. Feb 12 19:45:20.029717 systemd[1287]: Reached target basic.target. Feb 12 19:45:20.029812 systemd[1287]: Reached target default.target. Feb 12 19:45:20.029858 systemd[1287]: Startup finished in 96ms. Feb 12 19:45:20.030285 systemd[1]: Started user@500.service. Feb 12 19:45:20.031649 systemd[1]: Started session-1.scope. Feb 12 19:45:20.093155 systemd[1]: Started sshd@1-164.90.146.133:22-139.178.68.195:41898.service. Feb 12 19:45:20.150463 sshd[1296]: Accepted publickey for core from 139.178.68.195 port 41898 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:45:20.153246 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:20.158946 systemd-logind[1172]: New session 2 of user core. Feb 12 19:45:20.160403 systemd[1]: Started session-2.scope. Feb 12 19:45:20.227579 sshd[1296]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:20.231745 systemd[1]: sshd@1-164.90.146.133:22-139.178.68.195:41898.service: Deactivated successfully. Feb 12 19:45:20.232894 systemd-logind[1172]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:45:20.234628 systemd[1]: Started sshd@2-164.90.146.133:22-139.178.68.195:41900.service. Feb 12 19:45:20.235334 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:45:20.235927 systemd-logind[1172]: Removed session 2. Feb 12 19:45:20.291101 sshd[1303]: Accepted publickey for core from 139.178.68.195 port 41900 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:45:20.294055 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:20.300761 systemd[1]: Started session-3.scope. Feb 12 19:45:20.301253 systemd-logind[1172]: New session 3 of user core. Feb 12 19:45:20.361219 sshd[1303]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:20.365168 systemd[1]: Started sshd@3-164.90.146.133:22-139.178.68.195:41902.service. 
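The "SHA256:LDsRqpNY..." token in the Accepted publickey lines above is OpenSSH's unpadded base64 SHA-256 fingerprint of the client key. A small sketch, assuming the golang.org/x/crypto/ssh package, that prints the same fingerprint for each entry in the authorized_keys file that coreos-metadata-sshkeys wrote earlier:

package main

import (
	"fmt"
	"log"
	"os"

	// assumed dependency: go get golang.org/x/crypto/ssh
	"golang.org/x/crypto/ssh"
)

func main() {
	// Path taken from the update-ssh-keys line logged above.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	for len(data) > 0 {
		pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
		if err != nil {
			break // no further parseable keys (e.g. trailing blank lines)
		}
		// FingerprintSHA256 yields the "SHA256:..." form sshd logs.
		fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
		data = rest
	}
}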
Feb 12 19:45:20.367612 systemd[1]: sshd@2-164.90.146.133:22-139.178.68.195:41900.service: Deactivated successfully. Feb 12 19:45:20.369135 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:45:20.369515 systemd-logind[1172]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:45:20.370549 systemd-logind[1172]: Removed session 3. Feb 12 19:45:20.419930 sshd[1308]: Accepted publickey for core from 139.178.68.195 port 41902 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:45:20.422317 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:20.428418 systemd[1]: Started session-4.scope. Feb 12 19:45:20.429124 systemd-logind[1172]: New session 4 of user core. Feb 12 19:45:20.500492 sshd[1308]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:20.505111 systemd[1]: Started sshd@4-164.90.146.133:22-139.178.68.195:41912.service. Feb 12 19:45:20.507733 systemd-logind[1172]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:45:20.508123 systemd[1]: sshd@3-164.90.146.133:22-139.178.68.195:41902.service: Deactivated successfully. Feb 12 19:45:20.509237 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:45:20.509839 systemd-logind[1172]: Removed session 4. Feb 12 19:45:20.567805 sshd[1315]: Accepted publickey for core from 139.178.68.195 port 41912 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:45:20.569909 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:20.576588 systemd-logind[1172]: New session 5 of user core. Feb 12 19:45:20.577210 systemd[1]: Started session-5.scope. Feb 12 19:45:20.650122 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:45:20.651075 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:45:21.226780 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:45:21.235506 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:45:21.235911 systemd[1]: Reached target network-online.target. Feb 12 19:45:21.237820 systemd[1]: Starting docker.service... 
Feb 12 19:45:21.285837 env[1338]: time="2024-02-12T19:45:21.285770245Z" level=info msg="Starting up" Feb 12 19:45:21.288509 env[1338]: time="2024-02-12T19:45:21.288450578Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:45:21.288509 env[1338]: time="2024-02-12T19:45:21.288490225Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:45:21.288509 env[1338]: time="2024-02-12T19:45:21.288511139Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:45:21.288509 env[1338]: time="2024-02-12T19:45:21.288522481Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:45:21.290567 env[1338]: time="2024-02-12T19:45:21.290519335Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:45:21.290567 env[1338]: time="2024-02-12T19:45:21.290549883Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:45:21.290567 env[1338]: time="2024-02-12T19:45:21.290567959Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:45:21.290567 env[1338]: time="2024-02-12T19:45:21.290577011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:45:21.298214 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3377651129-merged.mount: Deactivated successfully. Feb 12 19:45:21.353509 env[1338]: time="2024-02-12T19:45:21.353464708Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 19:45:21.353810 env[1338]: time="2024-02-12T19:45:21.353786629Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 19:45:21.354177 env[1338]: time="2024-02-12T19:45:21.354157278Z" level=info msg="Loading containers: start." Feb 12 19:45:21.508229 kernel: Initializing XFRM netlink socket Feb 12 19:45:21.550362 env[1338]: time="2024-02-12T19:45:21.550322023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:45:21.641389 systemd-networkd[1067]: docker0: Link UP Feb 12 19:45:21.658577 env[1338]: time="2024-02-12T19:45:21.658516981Z" level=info msg="Loading containers: done." Feb 12 19:45:21.672812 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3654950011-merged.mount: Deactivated successfully. Feb 12 19:45:21.681247 env[1338]: time="2024-02-12T19:45:21.681132420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:45:21.681843 env[1338]: time="2024-02-12T19:45:21.681808867Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:45:21.682113 env[1338]: time="2024-02-12T19:45:21.682091147Z" level=info msg="Daemon has completed initialization" Feb 12 19:45:21.720494 systemd[1]: Started docker.service. Feb 12 19:45:21.727835 env[1338]: time="2024-02-12T19:45:21.727767389Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:45:21.754974 systemd[1]: Starting coreos-metadata.service... 
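Once the daemon logs "API listen on /run/docker.sock", the Docker Engine API is reachable over that Unix socket. A stdlib-only Go probe of the /version endpoint; the "unix" host in the URL is a placeholder net/http requires, not a real hostname, since the custom dialer ignores it and always connects to the socket:

package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Route all HTTP traffic over the Unix socket logged above.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://unix/version") // GET /version on the Engine API
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// JSON including the "Version":"20.10.23" the daemon reported above.
	fmt.Printf("%s\n", body)
}

It needs root or membership in the docker group, matching the socket's permissions.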
Feb 12 19:45:21.807979 coreos-metadata[1455]: Feb 12 19:45:21.807 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:45:21.822632 coreos-metadata[1455]: Feb 12 19:45:21.822 INFO Fetch successful Feb 12 19:45:21.843521 systemd[1]: Finished coreos-metadata.service. Feb 12 19:45:21.863767 systemd[1]: Reloading. Feb 12 19:45:21.991419 /usr/lib/systemd/system-generators/torcx-generator[1491]: time="2024-02-12T19:45:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:45:21.992004 /usr/lib/systemd/system-generators/torcx-generator[1491]: time="2024-02-12T19:45:21Z" level=info msg="torcx already run" Feb 12 19:45:22.113176 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:45:22.113686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:45:22.137970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:45:22.236913 systemd[1]: Started kubelet.service. Feb 12 19:45:22.339590 kubelet[1542]: E0212 19:45:22.339492 1542 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:45:22.341746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:45:22.341997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:45:22.780616 env[1189]: time="2024-02-12T19:45:22.780542314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:45:23.399966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052503079.mount: Deactivated successfully. 
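coreos-metadata succeeds on its first attempt against DigitalOcean's link-local metadata service. The equivalent fetch in Go, with a short timeout since 169.254.169.254 only answers from inside the droplet:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Same endpoint coreos-metadata logs above; unreachable off-droplet.
	resp, err := client.Get("http://169.254.169.254/metadata/v1.json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", body) // droplet metadata as a single JSON document
}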
Feb 12 19:45:25.579187 env[1189]: time="2024-02-12T19:45:25.579123147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:25.581926 env[1189]: time="2024-02-12T19:45:25.581875402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:25.584095 env[1189]: time="2024-02-12T19:45:25.584039082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:25.586644 env[1189]: time="2024-02-12T19:45:25.586597724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:25.587595 env[1189]: time="2024-02-12T19:45:25.587553438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 19:45:25.603173 env[1189]: time="2024-02-12T19:45:25.603124389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:45:27.875574 env[1189]: time="2024-02-12T19:45:27.875519327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.881118 env[1189]: time="2024-02-12T19:45:27.881038900Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.885026 env[1189]: time="2024-02-12T19:45:27.884952175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.888687 env[1189]: time="2024-02-12T19:45:27.888599940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 19:45:27.890701 env[1189]: time="2024-02-12T19:45:27.887709684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.913093 env[1189]: time="2024-02-12T19:45:27.913022646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:45:29.869285 env[1189]: time="2024-02-12T19:45:29.869213147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:29.883602 env[1189]: time="2024-02-12T19:45:29.877170697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:29.883602 env[1189]: 
time="2024-02-12T19:45:29.882853902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:29.887684 env[1189]: time="2024-02-12T19:45:29.887599527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:29.890365 env[1189]: time="2024-02-12T19:45:29.889498069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 19:45:29.913792 env[1189]: time="2024-02-12T19:45:29.913737392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:45:31.724423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017610592.mount: Deactivated successfully. Feb 12 19:45:32.593373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:45:32.593760 systemd[1]: Stopped kubelet.service. Feb 12 19:45:32.612687 systemd[1]: Started kubelet.service. Feb 12 19:45:32.768772 env[1189]: time="2024-02-12T19:45:32.768702495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:32.775613 env[1189]: time="2024-02-12T19:45:32.775552552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:32.784682 kubelet[1576]: E0212 19:45:32.784518 1576 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:45:32.785816 env[1189]: time="2024-02-12T19:45:32.785760813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:32.790229 env[1189]: time="2024-02-12T19:45:32.790142866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:32.791075 env[1189]: time="2024-02-12T19:45:32.790983746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:45:32.800668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:45:32.801040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:45:32.818382 env[1189]: time="2024-02-12T19:45:32.818305954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:45:33.393950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480623099.mount: Deactivated successfully. 
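The kubelet failure is identical on every restart: with dockershim removed, kubelet 1.26 refuses to start unless --container-runtime-endpoint names a CRI socket, here containerd's per its earlier logs (a fix would pass something like --container-runtime-endpoint=unix:///run/containerd/containerd.sock, e.g. via a systemd drop-in). A quick stdlib Go check, assuming that socket path, that the endpoint such a flag would reference actually accepts connections:

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// The path behind the assumed flag value
	//   --container-runtime-endpoint=unix:///run/containerd/containerd.sock
	const socket = "/run/containerd/containerd.sock"
	conn, err := net.DialTimeout("unix", socket, 2*time.Second)
	if err != nil {
		log.Fatalf("CRI socket not reachable: %v", err)
	}
	conn.Close()
	fmt.Println("CRI socket reachable:", socket)
}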
Feb 12 19:45:33.408692 env[1189]: time="2024-02-12T19:45:33.408632536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.413887 env[1189]: time="2024-02-12T19:45:33.413823459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.418753 env[1189]: time="2024-02-12T19:45:33.418697681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.423447 env[1189]: time="2024-02-12T19:45:33.423384538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.424790 env[1189]: time="2024-02-12T19:45:33.424736662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:45:33.449720 env[1189]: time="2024-02-12T19:45:33.449640038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:45:34.672575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413936568.mount: Deactivated successfully. Feb 12 19:45:39.784242 env[1189]: time="2024-02-12T19:45:39.784176255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:39.788996 env[1189]: time="2024-02-12T19:45:39.788940766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:39.792498 env[1189]: time="2024-02-12T19:45:39.792438738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:39.797106 env[1189]: time="2024-02-12T19:45:39.797049328Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:39.798233 env[1189]: time="2024-02-12T19:45:39.798175802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 19:45:39.815531 env[1189]: time="2024-02-12T19:45:39.815482959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:45:40.365313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290719344.mount: Deactivated successfully. 
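Each PullImage/ImageCreate pair above is containerd resolving a reference and recording both the tag and its content digest. A hedged sketch of the same pull through the containerd Go client (assumed module github.com/containerd/containerd), in the k8s.io namespace the CRI plugin stores Kubernetes images under:

package main

import (
	"context"
	"fmt"
	"log"

	// assumed dependency: go get github.com/containerd/containerd
	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI keeps Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Prints the tag plus the sha256 digest, mirroring the events above.
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}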
Feb 12 19:45:41.042396 env[1189]: time="2024-02-12T19:45:41.042321403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:41.045652 env[1189]: time="2024-02-12T19:45:41.045583114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:41.048616 env[1189]: time="2024-02-12T19:45:41.048557389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:41.058573 env[1189]: time="2024-02-12T19:45:41.053828230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 19:45:41.058573 env[1189]: time="2024-02-12T19:45:41.054208824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:43.052673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:45:43.053040 systemd[1]: Stopped kubelet.service. Feb 12 19:45:43.055702 systemd[1]: Started kubelet.service. Feb 12 19:45:43.178761 kubelet[1657]: E0212 19:45:43.178684 1657 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:45:43.181389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:45:43.181665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:45:45.060614 systemd[1]: Stopped kubelet.service. Feb 12 19:45:45.084944 systemd[1]: Reloading. Feb 12 19:45:45.182567 /usr/lib/systemd/system-generators/torcx-generator[1688]: time="2024-02-12T19:45:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:45:45.182598 /usr/lib/systemd/system-generators/torcx-generator[1688]: time="2024-02-12T19:45:45Z" level=info msg="torcx already run" Feb 12 19:45:45.301645 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:45:45.301687 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:45:45.323983 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:45:45.429530 systemd[1]: Started kubelet.service. Feb 12 19:45:45.519026 kubelet[1741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:45:45.519532 kubelet[1741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:45.519727 kubelet[1741]: I0212 19:45:45.519689 1741 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:45:45.521391 kubelet[1741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:45:45.521521 kubelet[1741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:45.828989 kubelet[1741]: I0212 19:45:45.828945 1741 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:45:45.829218 kubelet[1741]: I0212 19:45:45.829187 1741 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:45:45.829680 kubelet[1741]: I0212 19:45:45.829648 1741 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:45:45.834585 kubelet[1741]: E0212 19:45:45.834545 1741 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.90.146.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.834727 kubelet[1741]: I0212 19:45:45.834623 1741 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:45:45.838023 kubelet[1741]: I0212 19:45:45.837986 1741 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:45:45.838664 kubelet[1741]: I0212 19:45:45.838643 1741 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:45:45.838847 kubelet[1741]: I0212 19:45:45.838830 1741 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:45:45.839040 kubelet[1741]: I0212 19:45:45.839012 1741 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:45:45.839133 kubelet[1741]: I0212 19:45:45.839121 1741 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:45:45.839475 kubelet[1741]: I0212 19:45:45.839451 1741 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:45.843489 kubelet[1741]: I0212 19:45:45.843441 1741 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:45:45.843489 kubelet[1741]: I0212 19:45:45.843475 1741 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:45:45.843489 kubelet[1741]: I0212 19:45:45.843495 1741 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:45:45.843711 kubelet[1741]: I0212 19:45:45.843510 1741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:45:45.847134 kubelet[1741]: I0212 19:45:45.847085 1741 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:45:45.847469 kubelet[1741]: W0212 19:45:45.847452 1741 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
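The HardEvictionThresholds in the container manager config above are the kubelet defaults: evict when nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, or memory.available < 100Mi. A stdlib sketch of the nodefs check, under the simplifying assumption that "/" stands in for the kubelet root filesystem (the kubelet actually measures the filesystem backing /var/lib/kubelet):

package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	// nodefs.available < 10% is the first hard-eviction signal logged above.
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/", &fs); err != nil {
		log.Fatal(err)
	}
	avail := float64(fs.Bavail) / float64(fs.Blocks)
	fmt.Printf("nodefs.available = %.1f%%\n", avail*100)
	if avail < 0.10 {
		fmt.Println("below the 10% threshold: kubelet would begin evicting pods")
	}
}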
Feb 12 19:45:45.847839 kubelet[1741]: I0212 19:45:45.847822 1741 server.go:1186] "Started kubelet" Feb 12 19:45:45.847998 kubelet[1741]: W0212 19:45:45.847958 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://164.90.146.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-3-61711c62be&limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.848062 kubelet[1741]: E0212 19:45:45.848007 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.90.146.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-3-61711c62be&limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.850697 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:45:45.850848 kubelet[1741]: I0212 19:45:45.850822 1741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:45:45.852947 kubelet[1741]: W0212 19:45:45.852897 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.90.146.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.853164 kubelet[1741]: E0212 19:45:45.853148 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.90.146.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.855097 kubelet[1741]: I0212 19:45:45.855060 1741 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:45:45.856105 kubelet[1741]: I0212 19:45:45.856079 1741 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:45:45.858685 kubelet[1741]: E0212 19:45:45.858645 1741 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:45:45.858906 kubelet[1741]: E0212 19:45:45.858883 1741 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:45:45.859063 kubelet[1741]: I0212 19:45:45.858746 1741 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:45:45.859240 kubelet[1741]: I0212 19:45:45.858766 1741 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:45:45.859532 kubelet[1741]: E0212 19:45:45.859504 1741 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://164.90.146.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-3-61711c62be?timeout=10s": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.859812 kubelet[1741]: E0212 19:45:45.859675 1741 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bc7573f74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 847799668, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 847799668, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://164.90.146.133:6443/api/v1/namespaces/default/events": dial tcp 164.90.146.133:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:45:45.860506 kubelet[1741]: W0212 19:45:45.860468 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.90.146.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.860721 kubelet[1741]: E0212 19:45:45.860691 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.90.146.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.916126 kubelet[1741]: I0212 19:45:45.916079 1741 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:45:45.916479 kubelet[1741]: I0212 19:45:45.916445 1741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:45:45.916579 kubelet[1741]: I0212 19:45:45.916570 1741 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:45.920315 kubelet[1741]: I0212 19:45:45.920277 1741 policy_none.go:49] "None policy: Start" Feb 12 19:45:45.921547 kubelet[1741]: I0212 19:45:45.921512 1741 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:45:45.921547 kubelet[1741]: I0212 19:45:45.921552 1741 state_mem.go:35] "Initializing new in-memory 
state store" Feb 12 19:45:45.928599 kubelet[1741]: I0212 19:45:45.928558 1741 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:45:45.928856 kubelet[1741]: I0212 19:45:45.928834 1741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:45:45.940286 kubelet[1741]: E0212 19:45:45.940247 1741 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-3-61711c62be\" not found" Feb 12 19:45:45.953895 kubelet[1741]: I0212 19:45:45.953843 1741 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:45:45.960046 kubelet[1741]: I0212 19:45:45.959988 1741 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be" Feb 12 19:45:45.960476 kubelet[1741]: E0212 19:45:45.960453 1741 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.146.133:6443/api/v1/nodes\": dial tcp 164.90.146.133:6443: connect: connection refused" node="ci-3510.3.2-3-61711c62be" Feb 12 19:45:45.983919 kubelet[1741]: I0212 19:45:45.983872 1741 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:45:45.983919 kubelet[1741]: I0212 19:45:45.983926 1741 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:45:45.984114 kubelet[1741]: I0212 19:45:45.983949 1741 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:45:45.984163 kubelet[1741]: E0212 19:45:45.984146 1741 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:45:45.984910 kubelet[1741]: W0212 19:45:45.984669 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.90.146.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:45.984910 kubelet[1741]: E0212 19:45:45.984722 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.90.146.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:46.061122 kubelet[1741]: E0212 19:45:46.061061 1741 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://164.90.146.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-3-61711c62be?timeout=10s": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:46.086283 kubelet[1741]: I0212 19:45:46.085261 1741 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:46.088043 kubelet[1741]: I0212 19:45:46.088018 1741 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:46.089615 kubelet[1741]: I0212 19:45:46.089483 1741 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:46.091851 kubelet[1741]: I0212 19:45:46.091807 1741 status_manager.go:698] "Failed to get status for pod" podUID=96b12bfcbbd43fcccda02bcf24562f56 pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused" Feb 12 19:45:46.096358 kubelet[1741]: I0212 19:45:46.096310 1741 status_manager.go:698] "Failed to get status for pod" 
podUID=7c23f75c2f1aa0291f526a581efddf23 pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused" Feb 12 19:45:46.098841 kubelet[1741]: I0212 19:45:46.098801 1741 status_manager.go:698] "Failed to get status for pod" podUID=f9d54430e17e4ab63868b2747e956f5b pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused" Feb 12 19:45:46.162061 kubelet[1741]: I0212 19:45:46.162015 1741 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.162744 kubelet[1741]: E0212 19:45:46.162715 1741 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.146.133:6443/api/v1/nodes\": dial tcp 164.90.146.133:6443: connect: connection refused" node="ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261381 kubelet[1741]: I0212 19:45:46.261328 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261381 kubelet[1741]: I0212 19:45:46.261384 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261648 kubelet[1741]: I0212 19:45:46.261407 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261648 kubelet[1741]: I0212 19:45:46.261428 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261648 kubelet[1741]: I0212 19:45:46.261450 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261648 kubelet[1741]: I0212 19:45:46.261472 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261648 kubelet[1741]: I0212 19:45:46.261493 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9d54430e17e4ab63868b2747e956f5b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-3-61711c62be\" (UID: \"f9d54430e17e4ab63868b2747e956f5b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261853 kubelet[1741]: I0212 19:45:46.261523 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.261853 kubelet[1741]: I0212 19:45:46.261550 1741 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.394478 kubelet[1741]: E0212 19:45:46.394323 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:45:46.395911 env[1189]: time="2024-02-12T19:45:46.395844410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-3-61711c62be,Uid:96b12bfcbbd43fcccda02bcf24562f56,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:46.396543 kubelet[1741]: E0212 19:45:46.396477 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:45:46.397516 env[1189]: time="2024-02-12T19:45:46.397440851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-3-61711c62be,Uid:7c23f75c2f1aa0291f526a581efddf23,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:46.400699 kubelet[1741]: E0212 19:45:46.400673 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:45:46.401850 env[1189]: time="2024-02-12T19:45:46.401512653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-3-61711c62be,Uid:f9d54430e17e4ab63868b2747e956f5b,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:46.462283 kubelet[1741]: E0212 19:45:46.462188 1741 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://164.90.146.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-3-61711c62be?timeout=10s": dial tcp 164.90.146.133:6443: connect: connection refused Feb 12 19:45:46.565002 kubelet[1741]: I0212 19:45:46.564908 1741 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be" Feb 12 19:45:46.566079 kubelet[1741]: E0212 19:45:46.566051 1741 
kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.146.133:6443/api/v1/nodes\": dial tcp 164.90.146.133:6443: connect: connection refused" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:47.014550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536684372.mount: Deactivated successfully.
Feb 12 19:45:47.016778 kubelet[1741]: W0212 19:45:47.016700 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://164.90.146.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-3-61711c62be&limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.016778 kubelet[1741]: E0212 19:45:47.016779 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.90.146.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-3-61711c62be&limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.022692 env[1189]: time="2024-02-12T19:45:47.022643194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.028992 env[1189]: time="2024-02-12T19:45:47.028918580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.030944 env[1189]: time="2024-02-12T19:45:47.030891030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.032435 env[1189]: time="2024-02-12T19:45:47.032384132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.034625 env[1189]: time="2024-02-12T19:45:47.034581389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.037349 env[1189]: time="2024-02-12T19:45:47.037292144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.038834 env[1189]: time="2024-02-12T19:45:47.038787586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.041736 env[1189]: time="2024-02-12T19:45:47.041691769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.043782 env[1189]: time="2024-02-12T19:45:47.043739964Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.046833 env[1189]: time="2024-02-12T19:45:47.046789266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.048513 env[1189]: time="2024-02-12T19:45:47.048469402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.050428 env[1189]: time="2024-02-12T19:45:47.050380729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:45:47.096791 env[1189]: time="2024-02-12T19:45:47.096610485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:47.097051 env[1189]: time="2024-02-12T19:45:47.096799325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:47.097051 env[1189]: time="2024-02-12T19:45:47.096819023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:47.097322 env[1189]: time="2024-02-12T19:45:47.097279704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff90040f3235c732897771a0df1e7011231df4e26219b6278eea1ce2abcea564 pid=1825 runtime=io.containerd.runc.v2
Feb 12 19:45:47.105643 env[1189]: time="2024-02-12T19:45:47.105551674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:47.105643 env[1189]: time="2024-02-12T19:45:47.105600162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:47.105643 env[1189]: time="2024-02-12T19:45:47.105612125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:47.106669 env[1189]: time="2024-02-12T19:45:47.106601233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d605cdbaa09a45a0b098b3f99857915e1983c45974625d9c177fcc8745310dec pid=1824 runtime=io.containerd.runc.v2
Feb 12 19:45:47.122668 env[1189]: time="2024-02-12T19:45:47.122577679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:47.122668 env[1189]: time="2024-02-12T19:45:47.122625030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:47.122904 env[1189]: time="2024-02-12T19:45:47.122640601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:47.122904 env[1189]: time="2024-02-12T19:45:47.122800120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2d4faee6d7926f6833f3e1271ea4b1b17666a6b424340eb02e550ad660f8860 pid=1855 runtime=io.containerd.runc.v2
Feb 12 19:45:47.159366 kubelet[1741]: W0212 19:45:47.159245 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.90.146.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.159366 kubelet[1741]: E0212 19:45:47.159334 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.90.146.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.232242 env[1189]: time="2024-02-12T19:45:47.232027687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-3-61711c62be,Uid:96b12bfcbbd43fcccda02bcf24562f56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff90040f3235c732897771a0df1e7011231df4e26219b6278eea1ce2abcea564\""
Feb 12 19:45:47.235088 kubelet[1741]: E0212 19:45:47.235045 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:47.239978 env[1189]: time="2024-02-12T19:45:47.239924026Z" level=info msg="CreateContainer within sandbox \"ff90040f3235c732897771a0df1e7011231df4e26219b6278eea1ce2abcea564\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:45:47.252291 env[1189]: time="2024-02-12T19:45:47.252239770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-3-61711c62be,Uid:7c23f75c2f1aa0291f526a581efddf23,Namespace:kube-system,Attempt:0,} returns sandbox id \"d605cdbaa09a45a0b098b3f99857915e1983c45974625d9c177fcc8745310dec\""
Feb 12 19:45:47.253505 kubelet[1741]: E0212 19:45:47.253286 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:47.256104 env[1189]: time="2024-02-12T19:45:47.255657597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-3-61711c62be,Uid:f9d54430e17e4ab63868b2747e956f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2d4faee6d7926f6833f3e1271ea4b1b17666a6b424340eb02e550ad660f8860\""
Feb 12 19:45:47.258746 env[1189]: time="2024-02-12T19:45:47.258692012Z" level=info msg="CreateContainer within sandbox \"d605cdbaa09a45a0b098b3f99857915e1983c45974625d9c177fcc8745310dec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:45:47.258987 kubelet[1741]: E0212 19:45:47.258965 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:47.263023 kubelet[1741]: E0212 19:45:47.262935 1741 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://164.90.146.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-3-61711c62be?timeout=10s": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.263574 env[1189]: time="2024-02-12T19:45:47.263292946Z" level=info msg="CreateContainer within sandbox \"b2d4faee6d7926f6833f3e1271ea4b1b17666a6b424340eb02e550ad660f8860\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:45:47.269922 kubelet[1741]: W0212 19:45:47.269011 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.90.146.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.269922 kubelet[1741]: E0212 19:45:47.269090 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.90.146.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.283156 env[1189]: time="2024-02-12T19:45:47.283085628Z" level=info msg="CreateContainer within sandbox \"ff90040f3235c732897771a0df1e7011231df4e26219b6278eea1ce2abcea564\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db50079d6669197fe8d126e55af50b3724231d07bc2fcff695c952d941b48897\""
Feb 12 19:45:47.285000 env[1189]: time="2024-02-12T19:45:47.284949208Z" level=info msg="StartContainer for \"db50079d6669197fe8d126e55af50b3724231d07bc2fcff695c952d941b48897\""
Feb 12 19:45:47.289085 kubelet[1741]: W0212 19:45:47.288922 1741 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.90.146.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.289085 kubelet[1741]: E0212 19:45:47.289016 1741 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.90.146.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.293260 env[1189]: time="2024-02-12T19:45:47.293090112Z" level=info msg="CreateContainer within sandbox \"d605cdbaa09a45a0b098b3f99857915e1983c45974625d9c177fcc8745310dec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1784cad2b129855cf3dd43bb142f82f0921d01d8c4d593b9d3faa4b7c0b744d3\""
Feb 12 19:45:47.294598 env[1189]: time="2024-02-12T19:45:47.294536382Z" level=info msg="StartContainer for \"1784cad2b129855cf3dd43bb142f82f0921d01d8c4d593b9d3faa4b7c0b744d3\""
Feb 12 19:45:47.298279 env[1189]: time="2024-02-12T19:45:47.298215900Z" level=info msg="CreateContainer within sandbox \"b2d4faee6d7926f6833f3e1271ea4b1b17666a6b424340eb02e550ad660f8860\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6496ab95150abedfe153abbeec2bcea3c0a37fb243a0333a9cdfea42b4e5c18\""
Feb 12 19:45:47.299156 env[1189]: time="2024-02-12T19:45:47.299106599Z" level=info msg="StartContainer for \"b6496ab95150abedfe153abbeec2bcea3c0a37fb243a0333a9cdfea42b4e5c18\""
Feb 12 19:45:47.374808 kubelet[1741]: I0212 19:45:47.374763 1741 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:47.375351 kubelet[1741]: E0212 19:45:47.375295 1741 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.146.133:6443/api/v1/nodes\": dial tcp 164.90.146.133:6443: connect: connection refused" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:47.453274 env[1189]: time="2024-02-12T19:45:47.449169679Z" level=info msg="StartContainer for \"db50079d6669197fe8d126e55af50b3724231d07bc2fcff695c952d941b48897\" returns successfully"
Feb 12 19:45:47.527744 env[1189]: time="2024-02-12T19:45:47.527514343Z" level=info msg="StartContainer for \"b6496ab95150abedfe153abbeec2bcea3c0a37fb243a0333a9cdfea42b4e5c18\" returns successfully"
Feb 12 19:45:47.529327 env[1189]: time="2024-02-12T19:45:47.529275126Z" level=info msg="StartContainer for \"1784cad2b129855cf3dd43bb142f82f0921d01d8c4d593b9d3faa4b7c0b744d3\" returns successfully"
Feb 12 19:45:47.903478 kubelet[1741]: E0212 19:45:47.903323 1741 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.90.146.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.90.146.133:6443: connect: connection refused
Feb 12 19:45:47.992399 kubelet[1741]: E0212 19:45:47.992357 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:47.993011 kubelet[1741]: I0212 19:45:47.992981 1741 status_manager.go:698] "Failed to get status for pod" podUID=f9d54430e17e4ab63868b2747e956f5b pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused"
Feb 12 19:45:47.995182 kubelet[1741]: E0212 19:45:47.995134 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:47.995640 kubelet[1741]: I0212 19:45:47.995614 1741 status_manager.go:698] "Failed to get status for pod" podUID=7c23f75c2f1aa0291f526a581efddf23 pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused"
Feb 12 19:45:48.006587 kubelet[1741]: E0212 19:45:48.006553 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:48.044798 kubelet[1741]: I0212 19:45:48.044753 1741 status_manager.go:698] "Failed to get status for pod" podUID=96b12bfcbbd43fcccda02bcf24562f56 pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" err="Get \"https://164.90.146.133:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-3-61711c62be\": dial tcp 164.90.146.133:6443: connect: connection refused"
Feb 12 19:45:48.977483 kubelet[1741]: I0212 19:45:48.977443 1741 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:49.002309 kubelet[1741]: E0212 19:45:49.002275 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:49.002900 kubelet[1741]: E0212 19:45:49.002878 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:49.003078 kubelet[1741]: E0212 19:45:49.003058 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:50.004016 kubelet[1741]: E0212 19:45:50.003969 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:50.004868 kubelet[1741]: E0212 19:45:50.004843 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:51.626340 kubelet[1741]: E0212 19:45:51.626267 1741 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-3-61711c62be\" not found" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:51.692192 kubelet[1741]: I0212 19:45:51.692111 1741 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:51.855885 kubelet[1741]: I0212 19:45:51.855819 1741 apiserver.go:52] "Watching apiserver"
Feb 12 19:45:51.960682 kubelet[1741]: I0212 19:45:51.960506 1741 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:45:52.013691 kubelet[1741]: I0212 19:45:52.013605 1741 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:45:52.731079 kubelet[1741]: E0212 19:45:52.730923 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bc7573f74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 847799668, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 847799668, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:52.803487 kubelet[1741]: E0212 19:45:52.803323 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bc8001a77", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 858865783, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 858865783, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:52.868816 kubelet[1741]: E0212 19:45:52.868675 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5bdb4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915210571, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915210571, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:52.935413 kubelet[1741]: E0212 19:45:52.935219 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5c2aa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915230886, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915230886, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.002009 kubelet[1741]: E0212 19:45:53.001676 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5c3d41", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915235649, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915235649, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.064571 kubelet[1741]: E0212 19:45:53.064339 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcc3b61fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 929859581, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 929859581, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.132878 kubelet[1741]: E0212 19:45:53.132681 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5bdb4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915210571, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 959943941, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.205139 kubelet[1741]: E0212 19:45:53.204965 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5c2aa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915230886, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 959952604, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.290619 kubelet[1741]: E0212 19:45:53.290350 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5c3d41", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915235649, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 959956521, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.552676 kubelet[1741]: E0212 19:45:53.552322 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5bdb4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915210571, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 46, 87915230, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:53.952059 kubelet[1741]: E0212 19:45:53.942832 1741 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-3-61711c62be.17b3352bcb5c2aa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-3-61711c62be", UID:"ci-3510.3.2-3-61711c62be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-3-61711c62be status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-3-61711c62be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 45, 915230886, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 46, 87929835, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:55.563005 update_engine[1174]: I0212 19:45:55.562922 1174 update_attempter.cc:509] Updating boot flags...
Feb 12 19:45:55.628233 kubelet[1741]: E0212 19:45:55.628101 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:55.748870 systemd[1]: Reloading.
Feb 12 19:45:56.076787 kubelet[1741]: E0212 19:45:56.070051 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:56.093266 /usr/lib/systemd/system-generators/torcx-generator[2078]: time="2024-02-12T19:45:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:45:56.093309 /usr/lib/systemd/system-generators/torcx-generator[2078]: time="2024-02-12T19:45:56Z" level=info msg="torcx already run"
Feb 12 19:45:56.329694 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:45:56.330475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:45:56.371728 kubelet[1741]: I0212 19:45:56.371684 1741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be" podStartSLOduration=1.371559864 pod.CreationTimestamp="2024-02-12 19:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:56.129619564 +0000 UTC m=+10.692911632" watchObservedRunningTime="2024-02-12 19:45:56.371559864 +0000 UTC m=+10.934851929"
Feb 12 19:45:56.372748 kubelet[1741]: E0212 19:45:56.372711 1741 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:56.379600 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:45:56.589083 systemd[1]: Stopping kubelet.service...
Feb 12 19:45:56.590473 kubelet[1741]: I0212 19:45:56.590431 1741 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:45:56.607562 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:45:56.608233 systemd[1]: Stopped kubelet.service.
Feb 12 19:45:56.613909 systemd[1]: Started kubelet.service.
Feb 12 19:45:56.759984 kubelet[2129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:45:56.761262 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:45:56.761650 kubelet[2129]: I0212 19:45:56.761572 2129 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:45:56.767511 kubelet[2129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:45:56.767730 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:45:56.775656 kubelet[2129]: I0212 19:45:56.775611 2129 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:45:56.776780 kubelet[2129]: I0212 19:45:56.776742 2129 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:45:56.777365 kubelet[2129]: I0212 19:45:56.777336 2129 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:45:56.780881 kubelet[2129]: I0212 19:45:56.780844 2129 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:45:56.782319 kubelet[2129]: I0212 19:45:56.782271 2129 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:45:56.789012 kubelet[2129]: I0212 19:45:56.788969 2129 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:45:56.790397 kubelet[2129]: I0212 19:45:56.790368 2129 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:45:56.790787 kubelet[2129]: I0212 19:45:56.790765 2129 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:45:56.791000 kubelet[2129]: I0212 19:45:56.790985 2129 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:45:56.791179 kubelet[2129]: I0212 19:45:56.791163 2129 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:45:56.791379 kubelet[2129]: I0212 19:45:56.791365 2129 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:45:56.800650 sudo[2142]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 19:45:56.801134 sudo[2142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 19:45:56.815277 kubelet[2129]: I0212 19:45:56.814277 2129 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:45:56.815277 kubelet[2129]: I0212 19:45:56.814343 2129 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:45:56.815277 kubelet[2129]: I0212 19:45:56.814385 2129 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:45:56.815277 kubelet[2129]: I0212 19:45:56.814423 2129 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:45:56.829926 kubelet[2129]: I0212 19:45:56.824462 2129 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:45:56.829926 kubelet[2129]: I0212 19:45:56.826438 2129 server.go:1186] "Started kubelet"
Feb 12 19:45:56.837146 kubelet[2129]: I0212 19:45:56.837100 2129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:45:56.849002 kubelet[2129]: I0212 19:45:56.848755 2129 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:45:56.851548 kubelet[2129]: I0212 19:45:56.850593 2129 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:45:56.860048 kubelet[2129]: I0212 19:45:56.859996 2129 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:45:56.868055 kubelet[2129]: I0212 19:45:56.868004 2129 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:45:56.872424 kubelet[2129]: E0212 19:45:56.872385 2129 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:45:56.872762 kubelet[2129]: E0212 19:45:56.872725 2129 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:45:56.966772 kubelet[2129]: I0212 19:45:56.965845 2129 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:56.982473 kubelet[2129]: I0212 19:45:56.980323 2129 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:45:57.021669 kubelet[2129]: I0212 19:45:57.021622 2129 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.022555 kubelet[2129]: I0212 19:45:57.021735 2129 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.114168 kubelet[2129]: I0212 19:45:57.109159 2129 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:45:57.114168 kubelet[2129]: I0212 19:45:57.109240 2129 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:45:57.114168 kubelet[2129]: I0212 19:45:57.109272 2129 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:45:57.114168 kubelet[2129]: E0212 19:45:57.109342 2129 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:45:57.149716 kubelet[2129]: I0212 19:45:57.149628 2129 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:45:57.149716 kubelet[2129]: I0212 19:45:57.149660 2129 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:45:57.149716 kubelet[2129]: I0212 19:45:57.149689 2129 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:45:57.150002 kubelet[2129]: I0212 19:45:57.149959 2129 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 19:45:57.150002 kubelet[2129]: I0212 19:45:57.149981 2129 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 19:45:57.150002 kubelet[2129]: I0212 19:45:57.149991 2129 policy_none.go:49] "None policy: Start"
Feb 12 19:45:57.152386 kubelet[2129]: I0212 19:45:57.152338 2129 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:45:57.152386 kubelet[2129]: I0212 19:45:57.152391 2129 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:45:57.152703 kubelet[2129]: I0212 19:45:57.152658 2129 state_mem.go:75] "Updated machine memory state"
Feb 12 19:45:57.155368 kubelet[2129]: I0212 19:45:57.155173 2129 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:45:57.166513 kubelet[2129]: I0212 19:45:57.166348 2129 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:45:57.210452 kubelet[2129]: I0212 19:45:57.210353 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:57.210874 kubelet[2129]: I0212 19:45:57.210526 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:57.210874 kubelet[2129]: I0212 19:45:57.210586 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:57.228285 kubelet[2129]: E0212 19:45:57.226529 2129 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-3-61711c62be\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.233983 kubelet[2129]: E0212 19:45:57.233927 2129 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-3-61711c62be\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.282919 kubelet[2129]: I0212 19:45:57.282863 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.283405 kubelet[2129]: I0212 19:45:57.283362 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.283692 kubelet[2129]: I0212 19:45:57.283671 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.284243 kubelet[2129]: I0212 19:45:57.284220 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.284510 kubelet[2129]: I0212 19:45:57.284487 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.284771 kubelet[2129]: I0212 19:45:57.284753 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.284986 kubelet[2129]: I0212 19:45:57.284936 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9d54430e17e4ab63868b2747e956f5b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-3-61711c62be\" (UID: \"f9d54430e17e4ab63868b2747e956f5b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.285214 kubelet[2129]: I0212 19:45:57.285184 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96b12bfcbbd43fcccda02bcf24562f56-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-3-61711c62be\" (UID: \"96b12bfcbbd43fcccda02bcf24562f56\") " pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.285439 kubelet[2129]: I0212 19:45:57.285421 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c23f75c2f1aa0291f526a581efddf23-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" (UID: \"7c23f75c2f1aa0291f526a581efddf23\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:57.530873 kubelet[2129]: E0212 19:45:57.530816 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:57.543464 kubelet[2129]: E0212 19:45:57.543404 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:57.543671 kubelet[2129]: E0212 19:45:57.543561 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:57.818969 kubelet[2129]: I0212 19:45:57.818806 2129 apiserver.go:52] "Watching apiserver"
Feb 12 19:45:57.861658 sudo[2142]: pam_unix(sudo:session): session closed for user root
Feb 12 19:45:57.873113 kubelet[2129]: I0212 19:45:57.873039 2129 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:45:57.890969 kubelet[2129]: I0212 19:45:57.890922 2129 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:45:58.190519 kubelet[2129]: E0212 19:45:58.190313 2129 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-3-61711c62be\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:58.191687 kubelet[2129]: E0212 19:45:58.191645 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:58.246175 kubelet[2129]: E0212 19:45:58.246132 2129 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-3-61711c62be\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:58.246811 kubelet[2129]: E0212 19:45:58.246786 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:58.448635 kubelet[2129]: E0212 19:45:58.448465 2129 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-3-61711c62be\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-3-61711c62be"
Feb 12 19:45:58.449850 kubelet[2129]: E0212 19:45:58.449813 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:59.174716 kubelet[2129]: E0212 19:45:59.174686 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:59.178474 kubelet[2129]: E0212 19:45:59.178430 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:59.183635 kubelet[2129]: E0212 19:45:59.183583 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:45:59.229713 kubelet[2129]: I0212 19:45:59.229662 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-3-61711c62be" podStartSLOduration=2.229541066 pod.CreationTimestamp="2024-02-12 19:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:58.834707275 +0000 UTC m=+2.211070381" watchObservedRunningTime="2024-02-12 19:45:59.229541066 +0000 UTC m=+2.605904174"
Feb 12 19:45:59.828849 kubelet[2129]: I0212 19:45:59.828799 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-3-61711c62be" podStartSLOduration=3.828704139 pod.CreationTimestamp="2024-02-12 19:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:59.230886301 +0000 UTC m=+2.607249408" watchObservedRunningTime="2024-02-12 19:45:59.828704139 +0000 UTC m=+3.205067247"
Feb 12 19:46:00.176162 kubelet[2129]: E0212 19:46:00.176039 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:00.178656 kubelet[2129]: E0212 19:46:00.177508 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:00.423502 sudo[1321]: pam_unix(sudo:session): session closed for user root
Feb 12 19:46:00.429465 sshd[1315]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:00.434021 systemd-logind[1172]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:46:00.434231 systemd[1]: sshd@4-164.90.146.133:22-139.178.68.195:41912.service: Deactivated successfully.
Feb 12 19:46:00.435129 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:46:00.436099 systemd-logind[1172]: Removed session 5.
Feb 12 19:46:00.974919 kubelet[2129]: E0212 19:46:00.974877 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:01.179149 kubelet[2129]: E0212 19:46:01.179103 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:07.783355 kubelet[2129]: I0212 19:46:07.783318 2129 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:46:07.784888 env[1189]: time="2024-02-12T19:46:07.784831140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:46:07.785768 kubelet[2129]: I0212 19:46:07.785739 2129 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:46:08.421360 kubelet[2129]: I0212 19:46:08.421320 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:46:08.432421 kubelet[2129]: W0212 19:46:08.432348 2129 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-3-61711c62be" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-3-61711c62be' and this object
Feb 12 19:46:08.432421 kubelet[2129]: E0212 19:46:08.432419 2129 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-3-61711c62be" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-3-61711c62be' and this object
Feb 12 19:46:08.433023 kubelet[2129]: W0212 19:46:08.432997 2129 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-3-61711c62be" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-3-61711c62be' and this object
Feb 12 19:46:08.433123 kubelet[2129]: E0212 19:46:08.433028 2129 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-3-61711c62be" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-3-61711c62be' and this object
Feb 12 19:46:08.448355 kubelet[2129]: I0212 19:46:08.448313 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:46:08.473056 kubelet[2129]: I0212 19:46:08.473021 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2c77d51-2a84-463c-bad3-fa5db60611f1-kube-proxy\") pod \"kube-proxy-2rxrn\" (UID: \"d2c77d51-2a84-463c-bad3-fa5db60611f1\") " pod="kube-system/kube-proxy-2rxrn"
Feb 12 19:46:08.473295 kubelet[2129]: I0212 19:46:08.473081 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rqk\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-kube-api-access-98rqk\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473295 kubelet[2129]: I0212 19:46:08.473106 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-config-path\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473295 kubelet[2129]: I0212 19:46:08.473141 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhj4p\" (UniqueName: \"kubernetes.io/projected/d2c77d51-2a84-463c-bad3-fa5db60611f1-kube-api-access-fhj4p\") pod \"kube-proxy-2rxrn\" (UID: \"d2c77d51-2a84-463c-bad3-fa5db60611f1\") " pod="kube-system/kube-proxy-2rxrn"
Feb 12 19:46:08.473295 kubelet[2129]: I0212 19:46:08.473162 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-lib-modules\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473295 kubelet[2129]: I0212 19:46:08.473180 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-net\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473215 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-kernel\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473238 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-cgroup\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473339 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2c77d51-2a84-463c-bad3-fa5db60611f1-xtables-lock\") pod \"kube-proxy-2rxrn\" (UID: \"d2c77d51-2a84-463c-bad3-fa5db60611f1\") " pod="kube-system/kube-proxy-2rxrn"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473401 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-bpf-maps\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473437 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cni-path\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473521 kubelet[2129]: I0212 19:46:08.473482 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hubble-tls\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473502 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2c77d51-2a84-463c-bad3-fa5db60611f1-lib-modules\") pod \"kube-proxy-2rxrn\" (UID: \"d2c77d51-2a84-463c-bad3-fa5db60611f1\") " pod="kube-system/kube-proxy-2rxrn"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473531 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-run\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473577 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-etc-cni-netd\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473605 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fe7cce-2177-4f47-8ea3-871da42fdb33-clustermesh-secrets\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473649 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hostproc\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.473701 kubelet[2129]: I0212 19:46:08.473685 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-xtables-lock\") pod \"cilium-fw6zt\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " pod="kube-system/cilium-fw6zt"
Feb 12 19:46:08.766215 kubelet[2129]: I0212 19:46:08.766157 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:46:08.877474 kubelet[2129]: I0212 19:46:08.877350 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhc2\" (UniqueName: \"kubernetes.io/projected/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-kube-api-access-nvhc2\") pod \"cilium-operator-f59cbd8c6-9h9mq\" (UID: \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\") " pod="kube-system/cilium-operator-f59cbd8c6-9h9mq"
Feb 12 19:46:08.878048 kubelet[2129]: I0212 19:46:08.877520 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-9h9mq\" (UID: \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\") " pod="kube-system/cilium-operator-f59cbd8c6-9h9mq"
Feb 12 19:46:09.578373 kubelet[2129]: E0212 19:46:09.578289 2129 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 12 19:46:09.578586 kubelet[2129]: E0212 19:46:09.578439 2129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2c77d51-2a84-463c-bad3-fa5db60611f1-kube-proxy podName:d2c77d51-2a84-463c-bad3-fa5db60611f1 nodeName:}" failed. No retries permitted until 2024-02-12 19:46:10.078407412 +0000 UTC m=+13.454770516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d2c77d51-2a84-463c-bad3-fa5db60611f1-kube-proxy") pod "kube-proxy-2rxrn" (UID: "d2c77d51-2a84-463c-bad3-fa5db60611f1") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 19:46:09.652302 kubelet[2129]: E0212 19:46:09.652262 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:09.654910 env[1189]: time="2024-02-12T19:46:09.654815656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fw6zt,Uid:d1fe7cce-2177-4f47-8ea3-871da42fdb33,Namespace:kube-system,Attempt:0,}"
Feb 12 19:46:09.670067 kubelet[2129]: E0212 19:46:09.669972 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:46:09.672334 env[1189]: time="2024-02-12T19:46:09.672234773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9h9mq,Uid:30c12b57-fb52-43ad-bcca-cfa14dd7c4f1,Namespace:kube-system,Attempt:0,}"
Feb 12 19:46:09.697466 env[1189]: time="2024-02-12T19:46:09.697373669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:46:09.697735 env[1189]: time="2024-02-12T19:46:09.697445712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:46:09.697862 env[1189]: time="2024-02-12T19:46:09.697834628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:46:09.698325 env[1189]: time="2024-02-12T19:46:09.698276229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145 pid=2233 runtime=io.containerd.runc.v2
Feb 12 19:46:09.723656 env[1189]: time="2024-02-12T19:46:09.723396352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:46:09.723995 env[1189]: time="2024-02-12T19:46:09.723607967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:46:09.723995 env[1189]: time="2024-02-12T19:46:09.723654896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:09.724660 env[1189]: time="2024-02-12T19:46:09.724387034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b pid=2257 runtime=io.containerd.runc.v2 Feb 12 19:46:09.788701 env[1189]: time="2024-02-12T19:46:09.788641387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fw6zt,Uid:d1fe7cce-2177-4f47-8ea3-871da42fdb33,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\"" Feb 12 19:46:09.789508 kubelet[2129]: E0212 19:46:09.789470 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:09.791985 env[1189]: time="2024-02-12T19:46:09.791928825Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:46:09.849244 env[1189]: time="2024-02-12T19:46:09.849082791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9h9mq,Uid:30c12b57-fb52-43ad-bcca-cfa14dd7c4f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\"" Feb 12 19:46:09.851557 kubelet[2129]: E0212 19:46:09.851344 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:10.224636 kubelet[2129]: E0212 19:46:10.224520 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:10.225840 env[1189]: time="2024-02-12T19:46:10.225796211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rxrn,Uid:d2c77d51-2a84-463c-bad3-fa5db60611f1,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:10.248800 env[1189]: time="2024-02-12T19:46:10.248476672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:10.248800 env[1189]: time="2024-02-12T19:46:10.248530818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:10.248800 env[1189]: time="2024-02-12T19:46:10.248542094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:10.249080 env[1189]: time="2024-02-12T19:46:10.249006783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7799f13bf4f6fae913fe9e0153da2298da63ef09fd536689a32169af3c81239a pid=2319 runtime=io.containerd.runc.v2 Feb 12 19:46:10.310354 env[1189]: time="2024-02-12T19:46:10.310293684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2rxrn,Uid:d2c77d51-2a84-463c-bad3-fa5db60611f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7799f13bf4f6fae913fe9e0153da2298da63ef09fd536689a32169af3c81239a\"" Feb 12 19:46:10.312545 kubelet[2129]: E0212 19:46:10.312285 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:10.316643 env[1189]: time="2024-02-12T19:46:10.316587326Z" level=info msg="CreateContainer within sandbox \"7799f13bf4f6fae913fe9e0153da2298da63ef09fd536689a32169af3c81239a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:46:10.342510 env[1189]: time="2024-02-12T19:46:10.342397548Z" level=info msg="CreateContainer within sandbox \"7799f13bf4f6fae913fe9e0153da2298da63ef09fd536689a32169af3c81239a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24c291e6271c18bda22223d71b0bd07f7cb6e59daa2329db32796209aa56c698\"" Feb 12 19:46:10.345719 env[1189]: time="2024-02-12T19:46:10.345662744Z" level=info msg="StartContainer for \"24c291e6271c18bda22223d71b0bd07f7cb6e59daa2329db32796209aa56c698\"" Feb 12 19:46:10.436827 env[1189]: time="2024-02-12T19:46:10.436760100Z" level=info msg="StartContainer for \"24c291e6271c18bda22223d71b0bd07f7cb6e59daa2329db32796209aa56c698\" returns successfully" Feb 12 19:46:11.207242 kubelet[2129]: E0212 19:46:11.207037 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:11.221050 kubelet[2129]: I0212 19:46:11.221009 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2rxrn" podStartSLOduration=3.220964221 pod.CreationTimestamp="2024-02-12 19:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:11.220468408 +0000 UTC m=+14.596831516" watchObservedRunningTime="2024-02-12 19:46:11.220964221 +0000 UTC m=+14.597327327" Feb 12 19:46:12.209221 kubelet[2129]: E0212 19:46:12.209091 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:16.078569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717885910.mount: Deactivated successfully. 
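The kube-proxy startup above traces the standard CRI sequence: RunPodSandbox creates the pause sandbox, CreateContainer stages the kube-proxy container inside it, and StartContainer launches it. The MountVolume.SetUp failure from 19:46:09 cleared on its own once the kube-proxy ConfigMap cache synced, which is why the sandbox only appears after the scheduled 500ms retry. A quick way to inspect this state on the node, assuming crictl is installed and pointed at containerd (the IDs are the ones from the log):

    # sandbox backing the kube-proxy pod
    crictl pods --name kube-proxy-2rxrn
    # containers inside that sandbox
    crictl ps -a --pod 7799f13bf4f6fae913fe9e0153da2298da63ef09fd536689a32169af3c81239a
    # kube-proxy's own output
    crictl logs 24c291e6271c18bda22223d71b0bd07f7cb6e59daa2329db32796209aa56c698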
Feb 12 19:46:21.884930 env[1189]: time="2024-02-12T19:46:21.884821126Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.890589 env[1189]: time="2024-02-12T19:46:21.890495162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.895741 env[1189]: time="2024-02-12T19:46:21.895650059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.897820 env[1189]: time="2024-02-12T19:46:21.896788839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:46:21.909432 env[1189]: time="2024-02-12T19:46:21.902653297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:46:21.909432 env[1189]: time="2024-02-12T19:46:21.904765442Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:46:21.942083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846227713.mount: Deactivated successfully. Feb 12 19:46:21.957606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869055464.mount: Deactivated successfully. 
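The Cilium image reference is pinned by digest (v1.12.5@sha256:06ce2b0a...), so containerd resolves the pull by content hash and the tag is purely informational; the PullImage result above maps that digest to the local image ID sha256:3e35b3e9f2.... To view the digest-to-ID mapping on the node (again assuming crictl is available):

    crictl images --digests | grep cilium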
Feb 12 19:46:21.967124 env[1189]: time="2024-02-12T19:46:21.967050115Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\"" Feb 12 19:46:21.972318 env[1189]: time="2024-02-12T19:46:21.970079583Z" level=info msg="StartContainer for \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\"" Feb 12 19:46:22.094194 env[1189]: time="2024-02-12T19:46:22.094119753Z" level=info msg="StartContainer for \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\" returns successfully" Feb 12 19:46:22.258800 kubelet[2129]: E0212 19:46:22.258761 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:22.286029 env[1189]: time="2024-02-12T19:46:22.285017901Z" level=info msg="shim disconnected" id=4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43 Feb 12 19:46:22.286029 env[1189]: time="2024-02-12T19:46:22.285079589Z" level=warning msg="cleaning up after shim disconnected" id=4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43 namespace=k8s.io Feb 12 19:46:22.286029 env[1189]: time="2024-02-12T19:46:22.285101710Z" level=info msg="cleaning up dead shim" Feb 12 19:46:22.321404 env[1189]: time="2024-02-12T19:46:22.321322237Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2544 runtime=io.containerd.runc.v2\n" Feb 12 19:46:22.938933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43-rootfs.mount: Deactivated successfully. Feb 12 19:46:23.263573 kubelet[2129]: E0212 19:46:23.263457 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:23.291405 env[1189]: time="2024-02-12T19:46:23.289095937Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:46:23.368991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635933625.mount: Deactivated successfully. Feb 12 19:46:23.377892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420390324.mount: Deactivated successfully. Feb 12 19:46:23.388907 env[1189]: time="2024-02-12T19:46:23.388831475Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\"" Feb 12 19:46:23.392067 env[1189]: time="2024-02-12T19:46:23.390507405Z" level=info msg="StartContainer for \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\"" Feb 12 19:46:23.501466 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:46:23.501917 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:46:23.502417 systemd[1]: Stopping systemd-sysctl.service... 
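The "shim disconnected" and "cleaning up after shim disconnected" messages just above are not a crash: mount-cgroup is the first of the Cilium agent's init containers, and init containers run to completion and exit, after which containerd tears down their shim and rootfs mount. The same run-and-exit pattern repeats below for apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state. One way to confirm clean exits, assuming kubectl access to the cluster (the jsonpath is illustrative):

    kubectl -n kube-system get pod cilium-fw6zt \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{" exit="}{.state.terminated.exitCode}{"\n"}{end}'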
Feb 12 19:46:23.508029 env[1189]: time="2024-02-12T19:46:23.507959193Z" level=info msg="StartContainer for \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\" returns successfully" Feb 12 19:46:23.509991 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:46:23.538622 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:46:23.571554 env[1189]: time="2024-02-12T19:46:23.571481173Z" level=info msg="shim disconnected" id=fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e Feb 12 19:46:23.572140 env[1189]: time="2024-02-12T19:46:23.572094204Z" level=warning msg="cleaning up after shim disconnected" id=fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e namespace=k8s.io Feb 12 19:46:23.572331 env[1189]: time="2024-02-12T19:46:23.572307108Z" level=info msg="cleaning up dead shim" Feb 12 19:46:23.594649 env[1189]: time="2024-02-12T19:46:23.594593468Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2613 runtime=io.containerd.runc.v2\n" Feb 12 19:46:23.937337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078639457.mount: Deactivated successfully. Feb 12 19:46:24.269429 kubelet[2129]: E0212 19:46:24.269391 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:24.296173 env[1189]: time="2024-02-12T19:46:24.293908630Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:46:24.372299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694239634.mount: Deactivated successfully. Feb 12 19:46:24.381791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953077949.mount: Deactivated successfully. 
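The systemd-sysctl stop/start wrapped around apply-sysctl-overwrites looks deliberate: that init container rewrites kernel sysctls for Cilium's datapath, and restarting the unit re-applies the settings under /etc/sysctl.d so the two sources end up reconciled. To spot-check afterwards (the rp_filter value is what Cilium typically adjusts, stated here as an assumption rather than taken from this log):

    systemctl status systemd-sysctl
    sysctl net.ipv4.conf.all.rp_filter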
Feb 12 19:46:24.392981 env[1189]: time="2024-02-12T19:46:24.392921356Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\"" Feb 12 19:46:24.395443 env[1189]: time="2024-02-12T19:46:24.395402193Z" level=info msg="StartContainer for \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\"" Feb 12 19:46:24.408986 env[1189]: time="2024-02-12T19:46:24.408937682Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:24.410358 env[1189]: time="2024-02-12T19:46:24.410318784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:24.412261 env[1189]: time="2024-02-12T19:46:24.412226467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:24.412782 env[1189]: time="2024-02-12T19:46:24.412740761Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 19:46:24.417998 env[1189]: time="2024-02-12T19:46:24.417920866Z" level=info msg="CreateContainer within sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:46:24.455422 env[1189]: time="2024-02-12T19:46:24.455309353Z" level=info msg="CreateContainer within sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\"" Feb 12 19:46:24.458900 env[1189]: time="2024-02-12T19:46:24.458828181Z" level=info msg="StartContainer for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\"" Feb 12 19:46:24.506530 env[1189]: time="2024-02-12T19:46:24.506468702Z" level=info msg="StartContainer for \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\" returns successfully" Feb 12 19:46:24.556674 env[1189]: time="2024-02-12T19:46:24.555820667Z" level=info msg="shim disconnected" id=9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076 Feb 12 19:46:24.557049 env[1189]: time="2024-02-12T19:46:24.557000354Z" level=warning msg="cleaning up after shim disconnected" id=9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076 namespace=k8s.io Feb 12 19:46:24.557174 env[1189]: time="2024-02-12T19:46:24.557157805Z" level=info msg="cleaning up dead shim" Feb 12 19:46:24.573936 env[1189]: time="2024-02-12T19:46:24.573868766Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2696 runtime=io.containerd.runc.v2\n" Feb 12 19:46:24.598336 env[1189]: time="2024-02-12T19:46:24.598274122Z" level=info msg="StartContainer for 
\"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" returns successfully" Feb 12 19:46:25.273219 kubelet[2129]: E0212 19:46:25.273173 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:25.283547 kubelet[2129]: E0212 19:46:25.283499 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:25.286174 env[1189]: time="2024-02-12T19:46:25.286121302Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:46:25.314530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339789631.mount: Deactivated successfully. Feb 12 19:46:25.323702 env[1189]: time="2024-02-12T19:46:25.323645142Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\"" Feb 12 19:46:25.324967 env[1189]: time="2024-02-12T19:46:25.324931422Z" level=info msg="StartContainer for \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\"" Feb 12 19:46:25.501254 kubelet[2129]: I0212 19:46:25.501185 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-9h9mq" podStartSLOduration=-9.223372019353634e+09 pod.CreationTimestamp="2024-02-12 19:46:08 +0000 UTC" firstStartedPulling="2024-02-12 19:46:09.852911429 +0000 UTC m=+13.229274513" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:25.354344034 +0000 UTC m=+28.730707130" watchObservedRunningTime="2024-02-12 19:46:25.501141934 +0000 UTC m=+28.877505039" Feb 12 19:46:25.538621 env[1189]: time="2024-02-12T19:46:25.538490375Z" level=info msg="StartContainer for \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\" returns successfully" Feb 12 19:46:25.604635 env[1189]: time="2024-02-12T19:46:25.604562302Z" level=info msg="shim disconnected" id=9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459 Feb 12 19:46:25.604635 env[1189]: time="2024-02-12T19:46:25.604631602Z" level=warning msg="cleaning up after shim disconnected" id=9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459 namespace=k8s.io Feb 12 19:46:25.604635 env[1189]: time="2024-02-12T19:46:25.604646926Z" level=info msg="cleaning up dead shim" Feb 12 19:46:25.635842 env[1189]: time="2024-02-12T19:46:25.635762628Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2761 runtime=io.containerd.runc.v2\n" Feb 12 19:46:25.937603 systemd[1]: run-containerd-runc-k8s.io-9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459-runc.AJo5ni.mount: Deactivated successfully. Feb 12 19:46:25.938586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459-rootfs.mount: Deactivated successfully. 
Feb 12 19:46:26.290021 kubelet[2129]: E0212 19:46:26.289982 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:26.290911 kubelet[2129]: E0212 19:46:26.290870 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:26.296269 env[1189]: time="2024-02-12T19:46:26.296230676Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:46:26.348908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019695451.mount: Deactivated successfully. Feb 12 19:46:26.360209 env[1189]: time="2024-02-12T19:46:26.360082198Z" level=info msg="CreateContainer within sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\"" Feb 12 19:46:26.364984 env[1189]: time="2024-02-12T19:46:26.364899674Z" level=info msg="StartContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\"" Feb 12 19:46:26.486943 env[1189]: time="2024-02-12T19:46:26.486868509Z" level=info msg="StartContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" returns successfully" Feb 12 19:46:26.750779 kubelet[2129]: I0212 19:46:26.750731 2129 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:46:26.785050 kubelet[2129]: I0212 19:46:26.784982 2129 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:46:26.795833 kubelet[2129]: I0212 19:46:26.794840 2129 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:46:26.903529 kubelet[2129]: I0212 19:46:26.903492 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gc5z\" (UniqueName: \"kubernetes.io/projected/07baf74a-f5d0-45f0-9fa3-094dccc746da-kube-api-access-7gc5z\") pod \"coredns-787d4945fb-fzwdm\" (UID: \"07baf74a-f5d0-45f0-9fa3-094dccc746da\") " pod="kube-system/coredns-787d4945fb-fzwdm" Feb 12 19:46:26.904042 kubelet[2129]: I0212 19:46:26.904015 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6801ff7-3e30-4764-a3ce-f8254f7905b4-config-volume\") pod \"coredns-787d4945fb-dml5l\" (UID: \"c6801ff7-3e30-4764-a3ce-f8254f7905b4\") " pod="kube-system/coredns-787d4945fb-dml5l" Feb 12 19:46:26.904286 kubelet[2129]: I0212 19:46:26.904271 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqq7x\" (UniqueName: \"kubernetes.io/projected/c6801ff7-3e30-4764-a3ce-f8254f7905b4-kube-api-access-pqq7x\") pod \"coredns-787d4945fb-dml5l\" (UID: \"c6801ff7-3e30-4764-a3ce-f8254f7905b4\") " pod="kube-system/coredns-787d4945fb-dml5l" Feb 12 19:46:26.904423 kubelet[2129]: I0212 19:46:26.904412 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07baf74a-f5d0-45f0-9fa3-094dccc746da-config-volume\") pod \"coredns-787d4945fb-fzwdm\" (UID: \"07baf74a-f5d0-45f0-9fa3-094dccc746da\") " 
pod="kube-system/coredns-787d4945fb-fzwdm" Feb 12 19:46:27.095029 kubelet[2129]: E0212 19:46:27.094908 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:27.095868 env[1189]: time="2024-02-12T19:46:27.095587171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fzwdm,Uid:07baf74a-f5d0-45f0-9fa3-094dccc746da,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:27.105843 kubelet[2129]: E0212 19:46:27.105812 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:27.107001 env[1189]: time="2024-02-12T19:46:27.106945119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-dml5l,Uid:c6801ff7-3e30-4764-a3ce-f8254f7905b4,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:27.330776 kubelet[2129]: E0212 19:46:27.330709 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:28.332810 kubelet[2129]: E0212 19:46:28.332678 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:29.295507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:46:29.295695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:46:29.296042 systemd-networkd[1067]: cilium_host: Link UP Feb 12 19:46:29.296495 systemd-networkd[1067]: cilium_net: Link UP Feb 12 19:46:29.296720 systemd-networkd[1067]: cilium_net: Gained carrier Feb 12 19:46:29.297035 systemd-networkd[1067]: cilium_host: Gained carrier Feb 12 19:46:29.346543 kubelet[2129]: E0212 19:46:29.344651 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:29.484151 systemd-networkd[1067]: cilium_vxlan: Link UP Feb 12 19:46:29.484161 systemd-networkd[1067]: cilium_vxlan: Gained carrier Feb 12 19:46:29.601951 systemd-networkd[1067]: cilium_net: Gained IPv6LL Feb 12 19:46:29.973381 kernel: NET: Registered PF_ALG protocol family Feb 12 19:46:30.067742 systemd-networkd[1067]: cilium_host: Gained IPv6LL Feb 12 19:46:31.076331 systemd-networkd[1067]: lxc_health: Link UP Feb 12 19:46:31.099240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:46:31.098711 systemd-networkd[1067]: lxc_health: Gained carrier Feb 12 19:46:31.286003 systemd-networkd[1067]: cilium_vxlan: Gained IPv6LL Feb 12 19:46:31.394633 systemd-networkd[1067]: lxc92deec1ee213: Link UP Feb 12 19:46:31.403509 systemd-networkd[1067]: lxc4c5586874763: Link UP Feb 12 19:46:31.412335 kernel: eth0: renamed from tmp8f72b Feb 12 19:46:31.436895 kernel: eth0: renamed from tmp8d42c Feb 12 19:46:31.460285 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4c5586874763: link becomes ready Feb 12 19:46:31.457568 systemd-networkd[1067]: lxc4c5586874763: Gained carrier Feb 12 19:46:31.465296 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc92deec1ee213: link becomes ready Feb 12 19:46:31.462754 systemd-networkd[1067]: lxc92deec1ee213: Gained carrier Feb 12 19:46:31.658161 kubelet[2129]: E0212 
19:46:31.657499 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:31.704741 kubelet[2129]: I0212 19:46:31.704667 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fw6zt" podStartSLOduration=-9.223372013150158e+09 pod.CreationTimestamp="2024-02-12 19:46:08 +0000 UTC" firstStartedPulling="2024-02-12 19:46:09.790546019 +0000 UTC m=+13.166909103" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:27.399999285 +0000 UTC m=+30.776362392" watchObservedRunningTime="2024-02-12 19:46:31.704618472 +0000 UTC m=+35.080981571" Feb 12 19:46:32.233379 systemd-networkd[1067]: lxc_health: Gained IPv6LL Feb 12 19:46:32.348607 kubelet[2129]: E0212 19:46:32.348565 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:33.001475 systemd-networkd[1067]: lxc92deec1ee213: Gained IPv6LL Feb 12 19:46:33.321510 systemd-networkd[1067]: lxc4c5586874763: Gained IPv6LL Feb 12 19:46:37.971798 env[1189]: time="2024-02-12T19:46:37.964756945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:37.971798 env[1189]: time="2024-02-12T19:46:37.964837481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:37.971798 env[1189]: time="2024-02-12T19:46:37.964853952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:37.971798 env[1189]: time="2024-02-12T19:46:37.965110854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d42cf65807ccb3998cc9bb82f30dbee745742d1743d0fec58d5fb12e840ab54 pid=3314 runtime=io.containerd.runc.v2 Feb 12 19:46:38.119985 systemd[1]: run-containerd-runc-k8s.io-8d42cf65807ccb3998cc9bb82f30dbee745742d1743d0fec58d5fb12e840ab54-runc.xYi1vx.mount: Deactivated successfully. Feb 12 19:46:38.168316 env[1189]: time="2024-02-12T19:46:38.168114546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:38.178674 env[1189]: time="2024-02-12T19:46:38.178591839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:38.179019 env[1189]: time="2024-02-12T19:46:38.178962780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:38.179617 env[1189]: time="2024-02-12T19:46:38.179553371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f72b11529cfe943995c07d28d0996a9f59e112e9c40c9ea7cee64b56350a915 pid=3349 runtime=io.containerd.runc.v2 Feb 12 19:46:38.334271 env[1189]: time="2024-02-12T19:46:38.334164758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fzwdm,Uid:07baf74a-f5d0-45f0-9fa3-094dccc746da,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d42cf65807ccb3998cc9bb82f30dbee745742d1743d0fec58d5fb12e840ab54\"" Feb 12 19:46:38.336447 kubelet[2129]: E0212 19:46:38.336295 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:38.344133 env[1189]: time="2024-02-12T19:46:38.344068393Z" level=info msg="CreateContainer within sandbox \"8d42cf65807ccb3998cc9bb82f30dbee745742d1743d0fec58d5fb12e840ab54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:46:38.440409 env[1189]: time="2024-02-12T19:46:38.440286269Z" level=info msg="CreateContainer within sandbox \"8d42cf65807ccb3998cc9bb82f30dbee745742d1743d0fec58d5fb12e840ab54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6610a94c41e88b09872e99ad4843f4b9cf99fad0d3c8c0308062739336a9b103\"" Feb 12 19:46:38.442106 env[1189]: time="2024-02-12T19:46:38.442049913Z" level=info msg="StartContainer for \"6610a94c41e88b09872e99ad4843f4b9cf99fad0d3c8c0308062739336a9b103\"" Feb 12 19:46:38.627226 env[1189]: time="2024-02-12T19:46:38.627058362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-dml5l,Uid:c6801ff7-3e30-4764-a3ce-f8254f7905b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f72b11529cfe943995c07d28d0996a9f59e112e9c40c9ea7cee64b56350a915\"" Feb 12 19:46:38.629633 kubelet[2129]: E0212 19:46:38.629331 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:38.634967 env[1189]: time="2024-02-12T19:46:38.634897118Z" level=info msg="CreateContainer within sandbox \"8f72b11529cfe943995c07d28d0996a9f59e112e9c40c9ea7cee64b56350a915\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:46:38.689617 env[1189]: time="2024-02-12T19:46:38.689543478Z" level=info msg="CreateContainer within sandbox \"8f72b11529cfe943995c07d28d0996a9f59e112e9c40c9ea7cee64b56350a915\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d299fd45c3d0a8ced07f3fff4741b8cc0b3a81d005675ed25bcfaa75ca25316\"" Feb 12 19:46:38.704241 env[1189]: time="2024-02-12T19:46:38.697417209Z" level=info msg="StartContainer for \"3d299fd45c3d0a8ced07f3fff4741b8cc0b3a81d005675ed25bcfaa75ca25316\"" Feb 12 19:46:38.722001 env[1189]: time="2024-02-12T19:46:38.721895009Z" level=info msg="StartContainer for \"6610a94c41e88b09872e99ad4843f4b9cf99fad0d3c8c0308062739336a9b103\" returns successfully" Feb 12 19:46:38.864806 env[1189]: time="2024-02-12T19:46:38.864702361Z" level=info msg="StartContainer for \"3d299fd45c3d0a8ced07f3fff4741b8cc0b3a81d005675ed25bcfaa75ca25316\" returns successfully" Feb 12 19:46:38.989606 systemd[1]: run-containerd-runc-k8s.io-8f72b11529cfe943995c07d28d0996a9f59e112e9c40c9ea7cee64b56350a915-runc.e2ztAi.mount: Deactivated successfully. 
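The recurring dns.go:156 "Nameserver limits exceeded" error that threads through this whole log is worth decoding once: glibc resolvers honor at most three nameserver entries, the kubelet applies the same cap when it assembles resolv.conf for pods, and it logs this event whenever the node's resolver config exceeds the limit (note that the applied line even carries 67.207.67.2 twice). It is noisy but generally harmless; the CoreDNS pods that just started will forward to the applied upstreams. To see what the kubelet is consuming (paths and flags are the conventional ones, assumed here):

    cat /etc/resolv.conf                       # node resolver config
    ps aux | grep kubelet | grep resolv-conf   # any --resolv-conf override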
Feb 12 19:46:39.410782 kubelet[2129]: E0212 19:46:39.410746 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:39.419122 kubelet[2129]: E0212 19:46:39.419062 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:39.476964 kubelet[2129]: I0212 19:46:39.476921 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fzwdm" podStartSLOduration=31.476835308 pod.CreationTimestamp="2024-02-12 19:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:39.443717516 +0000 UTC m=+42.820080616" watchObservedRunningTime="2024-02-12 19:46:39.476835308 +0000 UTC m=+42.853198415" Feb 12 19:46:40.421320 kubelet[2129]: E0212 19:46:40.421277 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:40.422975 kubelet[2129]: E0212 19:46:40.422933 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:40.459441 kubelet[2129]: I0212 19:46:40.459387 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-dml5l" podStartSLOduration=32.459330538 pod.CreationTimestamp="2024-02-12 19:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:39.526305873 +0000 UTC m=+42.902669303" watchObservedRunningTime="2024-02-12 19:46:40.459330538 +0000 UTC m=+43.835693642" Feb 12 19:46:41.425158 kubelet[2129]: E0212 19:46:41.425111 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:41.426233 kubelet[2129]: E0212 19:46:41.425112 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:46:42.427247 kubelet[2129]: E0212 19:46:42.427163 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:03.116324 kubelet[2129]: E0212 19:47:03.111895 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:21.110932 kubelet[2129]: E0212 19:47:21.110892 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:22.171724 systemd[1]: Started sshd@5-164.90.146.133:22-139.178.68.195:43282.service. 
Feb 12 19:47:22.240243 sshd[3551]: Accepted publickey for core from 139.178.68.195 port 43282 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:22.243282 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:22.254968 systemd[1]: Started session-6.scope. Feb 12 19:47:22.255835 systemd-logind[1172]: New session 6 of user core. Feb 12 19:47:22.500321 sshd[3551]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:22.504391 systemd[1]: sshd@5-164.90.146.133:22-139.178.68.195:43282.service: Deactivated successfully. Feb 12 19:47:22.507105 systemd-logind[1172]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:47:22.508034 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:47:22.510325 systemd-logind[1172]: Removed session 6. Feb 12 19:47:27.114889 kubelet[2129]: E0212 19:47:27.114436 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:27.510563 systemd[1]: Started sshd@6-164.90.146.133:22-139.178.68.195:35658.service. Feb 12 19:47:27.578593 sshd[3564]: Accepted publickey for core from 139.178.68.195 port 35658 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:27.582162 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:27.596511 systemd[1]: Started session-7.scope. Feb 12 19:47:27.598008 systemd-logind[1172]: New session 7 of user core. Feb 12 19:47:27.881436 sshd[3564]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:27.887513 systemd[1]: sshd@6-164.90.146.133:22-139.178.68.195:35658.service: Deactivated successfully. Feb 12 19:47:27.889624 systemd-logind[1172]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:47:27.889792 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:47:27.894508 systemd-logind[1172]: Removed session 7. Feb 12 19:47:32.887836 systemd[1]: Started sshd@7-164.90.146.133:22-139.178.68.195:35662.service. Feb 12 19:47:32.953308 sshd[3579]: Accepted publickey for core from 139.178.68.195 port 35662 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:32.955951 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:32.963215 systemd-logind[1172]: New session 8 of user core. Feb 12 19:47:32.964575 systemd[1]: Started session-8.scope. Feb 12 19:47:33.125581 sshd[3579]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:33.130710 systemd-logind[1172]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:47:33.131094 systemd[1]: sshd@7-164.90.146.133:22-139.178.68.195:35662.service: Deactivated successfully. Feb 12 19:47:33.132125 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:47:33.133101 systemd-logind[1172]: Removed session 8. Feb 12 19:47:34.110798 kubelet[2129]: E0212 19:47:34.110758 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:38.128897 systemd[1]: Started sshd@8-164.90.146.133:22-139.178.68.195:53570.service. 
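From this point the log is mostly SSH session churn for the core user. Each "Accepted publickey ... SHA256:LDsRqpNY..." entry records the SHA256 fingerprint of the authenticating key, which can be matched against the account's authorized keys (the path shown is the Flatcar default for core, assumed):

    ssh-keygen -lf /home/core/.ssh/authorized_keys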
Feb 12 19:47:38.187701 sshd[3593]: Accepted publickey for core from 139.178.68.195 port 53570 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:38.190997 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:38.198752 systemd[1]: Started session-9.scope. Feb 12 19:47:38.199456 systemd-logind[1172]: New session 9 of user core. Feb 12 19:47:38.354010 sshd[3593]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:38.357999 systemd[1]: sshd@8-164.90.146.133:22-139.178.68.195:53570.service: Deactivated successfully. Feb 12 19:47:38.360304 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:47:38.360720 systemd-logind[1172]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:47:38.363008 systemd-logind[1172]: Removed session 9. Feb 12 19:47:43.359031 systemd[1]: Started sshd@9-164.90.146.133:22-139.178.68.195:53582.service. Feb 12 19:47:43.413709 sshd[3608]: Accepted publickey for core from 139.178.68.195 port 53582 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:43.415470 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:43.423264 systemd[1]: Started session-10.scope. Feb 12 19:47:43.423847 systemd-logind[1172]: New session 10 of user core. Feb 12 19:47:43.585978 sshd[3608]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:43.589988 systemd[1]: sshd@9-164.90.146.133:22-139.178.68.195:53582.service: Deactivated successfully. Feb 12 19:47:43.591720 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:47:43.592362 systemd-logind[1172]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:47:43.593652 systemd-logind[1172]: Removed session 10. Feb 12 19:47:44.110472 kubelet[2129]: E0212 19:47:44.110441 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:44.111878 kubelet[2129]: E0212 19:47:44.111846 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:48.604144 systemd[1]: Started sshd@10-164.90.146.133:22-139.178.68.195:59186.service. Feb 12 19:47:48.701156 sshd[3622]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:48.704185 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:48.717934 systemd-logind[1172]: New session 11 of user core. Feb 12 19:47:48.718961 systemd[1]: Started session-11.scope. Feb 12 19:47:48.970272 sshd[3622]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:48.976022 systemd[1]: sshd@10-164.90.146.133:22-139.178.68.195:59186.service: Deactivated successfully. Feb 12 19:47:48.978347 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:47:48.979314 systemd-logind[1172]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:47:48.988640 systemd[1]: Started sshd@11-164.90.146.133:22-139.178.68.195:59198.service. Feb 12 19:47:48.990999 systemd-logind[1172]: Removed session 11. 
Feb 12 19:47:49.068374 sshd[3636]: Accepted publickey for core from 139.178.68.195 port 59198 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:49.071842 sshd[3636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:49.092475 systemd[1]: Started session-12.scope. Feb 12 19:47:49.093272 systemd-logind[1172]: New session 12 of user core. Feb 12 19:47:51.036101 sshd[3636]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:51.041048 systemd[1]: Started sshd@12-164.90.146.133:22-139.178.68.195:59204.service. Feb 12 19:47:51.059159 systemd[1]: sshd@11-164.90.146.133:22-139.178.68.195:59198.service: Deactivated successfully. Feb 12 19:47:51.061600 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:47:51.084281 systemd-logind[1172]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:47:51.091718 systemd-logind[1172]: Removed session 12. Feb 12 19:47:51.195914 sshd[3645]: Accepted publickey for core from 139.178.68.195 port 59204 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:51.199006 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:51.216453 systemd-logind[1172]: New session 13 of user core. Feb 12 19:47:51.217697 systemd[1]: Started session-13.scope. Feb 12 19:47:51.484676 sshd[3645]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:51.492310 systemd[1]: sshd@12-164.90.146.133:22-139.178.68.195:59204.service: Deactivated successfully. Feb 12 19:47:51.495414 systemd-logind[1172]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:47:51.495739 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:47:51.499861 systemd-logind[1172]: Removed session 13. Feb 12 19:47:56.489780 systemd[1]: Started sshd@13-164.90.146.133:22-139.178.68.195:49454.service. Feb 12 19:47:56.542982 sshd[3659]: Accepted publickey for core from 139.178.68.195 port 49454 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:56.545602 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:56.552296 systemd-logind[1172]: New session 14 of user core. Feb 12 19:47:56.553630 systemd[1]: Started session-14.scope. Feb 12 19:47:56.710261 sshd[3659]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:56.714322 systemd[1]: sshd@13-164.90.146.133:22-139.178.68.195:49454.service: Deactivated successfully. Feb 12 19:47:56.716570 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:47:56.717296 systemd-logind[1172]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:47:56.718479 systemd-logind[1172]: Removed session 14. 
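The session-N.scope units paired with "New session N of user core" are pam_systemd handing each login to systemd-logind, which wraps the login's processes in a transient scope; "Session N logged out. Waiting for processes to exit" is logind reaping that scope on disconnect. The overlapping sessions 11 through 13 here are simply concurrent connections. Live sessions can be listed with the standard logind tooling:

    loginctl list-sessions
    loginctl session-status 12   # a session number from the log, for illustration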
Feb 12 19:47:59.112751 kubelet[2129]: E0212 19:47:59.112653 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:59.537228 update_engine[1174]: I0212 19:47:59.537123 1174 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 12 19:47:59.538127 update_engine[1174]: I0212 19:47:59.537340 1174 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 12 19:47:59.541842 update_engine[1174]: I0212 19:47:59.541775 1174 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 12 19:47:59.542759 update_engine[1174]: I0212 19:47:59.542538 1174 omaha_request_params.cc:62] Current group set to lts Feb 12 19:47:59.546472 update_engine[1174]: I0212 19:47:59.546330 1174 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 12 19:47:59.546472 update_engine[1174]: I0212 19:47:59.546356 1174 update_attempter.cc:643] Scheduling an action processor start. Feb 12 19:47:59.546472 update_engine[1174]: I0212 19:47:59.546386 1174 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:47:59.546850 update_engine[1174]: I0212 19:47:59.546486 1174 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 12 19:47:59.546850 update_engine[1174]: I0212 19:47:59.546794 1174 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:47:59.546850 update_engine[1174]: I0212 19:47:59.546809 1174 omaha_request_action.cc:271] Request: Feb 12 19:47:59.546850 update_engine[1174]: [eight empty continuation lines: the XML request body did not survive the capture] Feb 12 19:47:59.546850 update_engine[1174]: I0212 19:47:59.546818 1174 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:47:59.554991 update_engine[1174]: I0212 19:47:59.554940 1174 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:47:59.555189 update_engine[1174]: E0212 19:47:59.555130 1174 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:47:59.555342 update_engine[1174]: I0212 19:47:59.555321 1174 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 12 19:47:59.572149 locksmithd[1225]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 12 19:48:01.716060 systemd[1]: Started sshd@14-164.90.146.133:22-139.178.68.195:49456.service. Feb 12 19:48:01.788158 sshd[3674]: Accepted publickey for core from 139.178.68.195 port 49456 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:01.791049 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:01.803337 systemd-logind[1172]: New session 15 of user core. Feb 12 19:48:01.805556 systemd[1]: Started session-15.scope. Feb 12 19:48:01.980167 sshd[3674]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:01.985684 systemd[1]: sshd@14-164.90.146.133:22-139.178.68.195:49456.service: Deactivated successfully. Feb 12 19:48:01.987592 systemd-logind[1172]: Session 15 logged out. Waiting for processes to exit.
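The Omaha exchange above fails by design: "Posting an Omaha request to disabled" shows the update server URL is literally the string disabled, so curl's "Could not resolve host: disabled" is the expected outcome and update_engine simply schedules retries (retry 1 here, retries 2 and 3 below). On Flatcar this opt-out is conventionally expressed as SERVER=disabled in /etc/flatcar/update.conf; that file and the command below are assumptions about this host, not read from the log:

    update_engine_client -status    # current updater state
    cat /etc/flatcar/update.conf    # expect SERVER=disabled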
Feb 12 19:48:01.988046 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:48:01.990282 systemd-logind[1172]: Removed session 15. Feb 12 19:48:06.986139 systemd[1]: Started sshd@15-164.90.146.133:22-139.178.68.195:42644.service. Feb 12 19:48:07.058019 sshd[3687]: Accepted publickey for core from 139.178.68.195 port 42644 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:07.060754 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:07.068651 systemd-logind[1172]: New session 16 of user core. Feb 12 19:48:07.069830 systemd[1]: Started session-16.scope. Feb 12 19:48:07.261978 sshd[3687]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:07.266775 systemd[1]: sshd@15-164.90.146.133:22-139.178.68.195:42644.service: Deactivated successfully. Feb 12 19:48:07.268703 systemd-logind[1172]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:48:07.268878 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:48:07.272035 systemd-logind[1172]: Removed session 16. Feb 12 19:48:08.110381 kubelet[2129]: E0212 19:48:08.110337 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:09.532084 update_engine[1174]: I0212 19:48:09.531965 1174 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:09.532995 update_engine[1174]: I0212 19:48:09.532505 1174 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:09.532995 update_engine[1174]: E0212 19:48:09.532662 1174 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:09.532995 update_engine[1174]: I0212 19:48:09.532792 1174 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 12 19:48:12.276223 systemd[1]: Started sshd@16-164.90.146.133:22-139.178.68.195:42646.service. Feb 12 19:48:12.369935 sshd[3703]: Accepted publickey for core from 139.178.68.195 port 42646 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:12.373535 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:12.387568 systemd[1]: Started session-17.scope. Feb 12 19:48:12.388453 systemd-logind[1172]: New session 17 of user core. Feb 12 19:48:12.678829 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:12.685466 systemd[1]: Started sshd@17-164.90.146.133:22-139.178.68.195:42656.service. Feb 12 19:48:12.696146 systemd[1]: sshd@16-164.90.146.133:22-139.178.68.195:42646.service: Deactivated successfully. Feb 12 19:48:12.700545 systemd-logind[1172]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:48:12.700729 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:48:12.708956 systemd-logind[1172]: Removed session 17. Feb 12 19:48:12.768961 sshd[3714]: Accepted publickey for core from 139.178.68.195 port 42656 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:12.773543 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:12.792563 systemd-logind[1172]: New session 18 of user core. Feb 12 19:48:12.794140 systemd[1]: Started session-18.scope. Feb 12 19:48:13.695509 sshd[3714]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:13.703887 systemd[1]: Started sshd@18-164.90.146.133:22-139.178.68.195:42666.service. 
Feb 12 19:48:13.713392 systemd[1]: sshd@17-164.90.146.133:22-139.178.68.195:42656.service: Deactivated successfully. Feb 12 19:48:13.713428 systemd-logind[1172]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:48:13.715847 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:48:13.719823 systemd-logind[1172]: Removed session 18. Feb 12 19:48:13.788833 sshd[3725]: Accepted publickey for core from 139.178.68.195 port 42666 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:13.793053 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:13.814859 systemd-logind[1172]: New session 19 of user core. Feb 12 19:48:13.815481 systemd[1]: Started session-19.scope. Feb 12 19:48:15.673432 systemd[1]: Started sshd@19-164.90.146.133:22-139.178.68.195:42676.service. Feb 12 19:48:15.674049 sshd[3725]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:15.687925 systemd[1]: sshd@18-164.90.146.133:22-139.178.68.195:42666.service: Deactivated successfully. Feb 12 19:48:15.690793 systemd-logind[1172]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:48:15.690837 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:48:15.696743 systemd-logind[1172]: Removed session 19. Feb 12 19:48:15.782502 sshd[3744]: Accepted publickey for core from 139.178.68.195 port 42676 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:15.786036 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:15.801650 systemd[1]: Started session-20.scope. Feb 12 19:48:15.805185 systemd-logind[1172]: New session 20 of user core. Feb 12 19:48:16.291639 sshd[3744]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:16.297044 systemd[1]: Started sshd@20-164.90.146.133:22-139.178.68.195:40928.service. Feb 12 19:48:16.307397 systemd-logind[1172]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:48:16.310019 systemd[1]: sshd@19-164.90.146.133:22-139.178.68.195:42676.service: Deactivated successfully. Feb 12 19:48:16.311570 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:48:16.321918 systemd-logind[1172]: Removed session 20. Feb 12 19:48:16.371143 sshd[3806]: Accepted publickey for core from 139.178.68.195 port 40928 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:16.374693 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:16.384567 systemd-logind[1172]: New session 21 of user core. Feb 12 19:48:16.385738 systemd[1]: Started session-21.scope. Feb 12 19:48:16.579159 sshd[3806]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:16.586015 systemd[1]: sshd@20-164.90.146.133:22-139.178.68.195:40928.service: Deactivated successfully. Feb 12 19:48:16.599500 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:48:16.600741 systemd-logind[1172]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:48:16.603872 systemd-logind[1172]: Removed session 21. 
Feb 12 19:48:19.533054 update_engine[1174]: I0212 19:48:19.532956 1174 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:19.533820 update_engine[1174]: I0212 19:48:19.533349 1174 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:19.533820 update_engine[1174]: E0212 19:48:19.533486 1174 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:19.533820 update_engine[1174]: I0212 19:48:19.533600 1174 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 12 19:48:21.586891 systemd[1]: Started sshd@21-164.90.146.133:22-139.178.68.195:40936.service. Feb 12 19:48:21.640412 sshd[3820]: Accepted publickey for core from 139.178.68.195 port 40936 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:21.643231 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:21.650465 systemd[1]: Started session-22.scope. Feb 12 19:48:21.651512 systemd-logind[1172]: New session 22 of user core. Feb 12 19:48:21.800047 sshd[3820]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:21.803977 systemd[1]: sshd@21-164.90.146.133:22-139.178.68.195:40936.service: Deactivated successfully. Feb 12 19:48:21.806654 systemd-logind[1172]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:48:21.807548 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:48:21.809081 systemd-logind[1172]: Removed session 22. Feb 12 19:48:26.110397 kubelet[2129]: E0212 19:48:26.110339 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:26.806635 systemd[1]: Started sshd@22-164.90.146.133:22-139.178.68.195:43838.service. Feb 12 19:48:26.864006 sshd[3859]: Accepted publickey for core from 139.178.68.195 port 43838 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:26.866582 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:26.873928 systemd[1]: Started session-23.scope. Feb 12 19:48:26.874513 systemd-logind[1172]: New session 23 of user core. Feb 12 19:48:27.025524 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:27.030315 systemd[1]: sshd@22-164.90.146.133:22-139.178.68.195:43838.service: Deactivated successfully. Feb 12 19:48:27.031702 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:48:27.032461 systemd-logind[1172]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:48:27.034170 systemd-logind[1172]: Removed session 23. Feb 12 19:48:29.532543 update_engine[1174]: I0212 19:48:29.532481 1174 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:29.533186 update_engine[1174]: I0212 19:48:29.532763 1174 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:29.533186 update_engine[1174]: E0212 19:48:29.533001 1174 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:29.533186 update_engine[1174]: I0212 19:48:29.533113 1174 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:48:29.533186 update_engine[1174]: I0212 19:48:29.533123 1174 omaha_request_action.cc:621] Omaha request response: Feb 12 19:48:29.533597 update_engine[1174]: E0212 19:48:29.533243 1174 omaha_request_action.cc:640] Omaha request network transfer failed. 
Feb 12 19:48:29.533818 update_engine[1174]: I0212 19:48:29.533782 1174 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 12 19:48:29.533818 update_engine[1174]: I0212 19:48:29.533807 1174 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:29.533818 update_engine[1174]: I0212 19:48:29.533811 1174 update_attempter.cc:306] Processing Done. Feb 12 19:48:29.533818 update_engine[1174]: E0212 19:48:29.533824 1174 update_attempter.cc:619] Update failed. Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533838 1174 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533841 1174 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533845 1174 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533915 1174 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533936 1174 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533939 1174 omaha_request_action.cc:271] Request: Feb 12 19:48:29.534037 update_engine[1174]: I0212 19:48:29.533946 1174 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534144 1174 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:29.534744 update_engine[1174]: E0212 19:48:29.534262 1174 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534341 1174 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534353 1174 omaha_request_action.cc:621] Omaha request response: Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534361 1174 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534366 1174 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534371 1174 update_attempter.cc:306] Processing Done. Feb 12 19:48:29.534744 update_engine[1174]: I0212 19:48:29.534377 1174 update_attempter.cc:310] Error event sent. Feb 12 19:48:29.536000 update_engine[1174]: I0212 19:48:29.535873 1174 update_check_scheduler.cc:74] Next update check in 41m44s Feb 12 19:48:29.536050 locksmithd[1225]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 12 19:48:29.536484 locksmithd[1225]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 12 19:48:32.032500 systemd[1]: Started sshd@23-164.90.146.133:22-139.178.68.195:43852.service.
Feb 12 19:48:32.089151 sshd[3872]: Accepted publickey for core from 139.178.68.195 port 43852 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:32.091751 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:32.098630 systemd-logind[1172]: New session 24 of user core. Feb 12 19:48:32.098651 systemd[1]: Started session-24.scope. Feb 12 19:48:32.248922 sshd[3872]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:32.253497 systemd[1]: sshd@23-164.90.146.133:22-139.178.68.195:43852.service: Deactivated successfully. Feb 12 19:48:32.255527 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:48:32.256322 systemd-logind[1172]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:48:32.257394 systemd-logind[1172]: Removed session 24. Feb 12 19:48:37.126171 kubelet[2129]: E0212 19:48:37.126105 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:37.254289 systemd[1]: Started sshd@24-164.90.146.133:22-139.178.68.195:41730.service. Feb 12 19:48:37.332727 sshd[3884]: Accepted publickey for core from 139.178.68.195 port 41730 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:37.335745 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:37.347397 systemd[1]: Started session-25.scope. Feb 12 19:48:37.350745 systemd-logind[1172]: New session 25 of user core. Feb 12 19:48:37.583709 sshd[3884]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:37.591456 systemd[1]: sshd@24-164.90.146.133:22-139.178.68.195:41730.service: Deactivated successfully. Feb 12 19:48:37.593500 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:48:37.601163 systemd-logind[1172]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:48:37.605894 systemd-logind[1172]: Removed session 25. Feb 12 19:48:42.591387 systemd[1]: Started sshd@25-164.90.146.133:22-139.178.68.195:41732.service. Feb 12 19:48:42.646896 sshd[3899]: Accepted publickey for core from 139.178.68.195 port 41732 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:42.649467 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:42.658050 systemd[1]: Started session-26.scope. Feb 12 19:48:42.659503 systemd-logind[1172]: New session 26 of user core. Feb 12 19:48:42.812611 sshd[3899]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:42.817630 systemd[1]: sshd@25-164.90.146.133:22-139.178.68.195:41732.service: Deactivated successfully. Feb 12 19:48:42.819422 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:48:42.820174 systemd-logind[1172]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:48:42.821881 systemd-logind[1172]: Removed session 26. Feb 12 19:48:47.819703 systemd[1]: Started sshd@26-164.90.146.133:22-139.178.68.195:45458.service. Feb 12 19:48:47.876431 sshd[3912]: Accepted publickey for core from 139.178.68.195 port 45458 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:47.879120 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:47.888232 systemd[1]: Started session-27.scope. Feb 12 19:48:47.891375 systemd-logind[1172]: New session 27 of user core. 
Feb 12 19:48:48.074450 sshd[3912]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:48.078969 systemd[1]: sshd@26-164.90.146.133:22-139.178.68.195:45458.service: Deactivated successfully. Feb 12 19:48:48.079346 systemd-logind[1172]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:48:48.080020 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:48:48.081267 systemd-logind[1172]: Removed session 27. Feb 12 19:48:49.111033 kubelet[2129]: E0212 19:48:49.110998 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:52.110814 kubelet[2129]: E0212 19:48:52.110771 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:53.081174 systemd[1]: Started sshd@27-164.90.146.133:22-139.178.68.195:45474.service. Feb 12 19:48:53.141249 sshd[3926]: Accepted publickey for core from 139.178.68.195 port 45474 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:53.143765 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:53.150679 systemd[1]: Started session-28.scope. Feb 12 19:48:53.152149 systemd-logind[1172]: New session 28 of user core. Feb 12 19:48:53.301425 sshd[3926]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:53.305566 systemd[1]: Started sshd@28-164.90.146.133:22-139.178.68.195:45488.service. Feb 12 19:48:53.313442 systemd-logind[1172]: Session 28 logged out. Waiting for processes to exit. Feb 12 19:48:53.314452 systemd[1]: sshd@27-164.90.146.133:22-139.178.68.195:45474.service: Deactivated successfully. Feb 12 19:48:53.315574 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 19:48:53.316661 systemd-logind[1172]: Removed session 28. Feb 12 19:48:53.362676 sshd[3938]: Accepted publickey for core from 139.178.68.195 port 45488 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:48:53.365361 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:53.373743 systemd[1]: Started session-29.scope. Feb 12 19:48:53.374622 systemd-logind[1172]: New session 29 of user core. Feb 12 19:48:55.171900 env[1189]: time="2024-02-12T19:48:55.171853170Z" level=info msg="StopContainer for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" with timeout 30 (s)" Feb 12 19:48:55.178656 env[1189]: time="2024-02-12T19:48:55.178612724Z" level=info msg="Stop container \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" with signal terminated" Feb 12 19:48:55.184560 systemd[1]: run-containerd-runc-k8s.io-737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9-runc.0AEdyI.mount: Deactivated successfully. 
Feb 12 19:48:55.221701 env[1189]: time="2024-02-12T19:48:55.221633974Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:48:55.224839 env[1189]: time="2024-02-12T19:48:55.224799847Z" level=info msg="StopContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" with timeout 1 (s)" Feb 12 19:48:55.225577 env[1189]: time="2024-02-12T19:48:55.225544163Z" level=info msg="Stop container \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" with signal terminated" Feb 12 19:48:55.238292 systemd-networkd[1067]: lxc_health: Link DOWN Feb 12 19:48:55.238300 systemd-networkd[1067]: lxc_health: Lost carrier Feb 12 19:48:55.254898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba-rootfs.mount: Deactivated successfully. Feb 12 19:48:55.284156 env[1189]: time="2024-02-12T19:48:55.283369052Z" level=info msg="shim disconnected" id=086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba Feb 12 19:48:55.284507 env[1189]: time="2024-02-12T19:48:55.284477454Z" level=warning msg="cleaning up after shim disconnected" id=086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba namespace=k8s.io Feb 12 19:48:55.284644 env[1189]: time="2024-02-12T19:48:55.284622188Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.309503 env[1189]: time="2024-02-12T19:48:55.309447884Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3997 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.314846 env[1189]: time="2024-02-12T19:48:55.314765135Z" level=info msg="StopContainer for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" returns successfully" Feb 12 19:48:55.316122 env[1189]: time="2024-02-12T19:48:55.316072987Z" level=info msg="StopPodSandbox for \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\"" Feb 12 19:48:55.316494 env[1189]: time="2024-02-12T19:48:55.316462108Z" level=info msg="Container to stop \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.322500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b-shm.mount: Deactivated successfully. 
Feb 12 19:48:55.339748 env[1189]: time="2024-02-12T19:48:55.339687398Z" level=info msg="shim disconnected" id=737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9 Feb 12 19:48:55.341144 env[1189]: time="2024-02-12T19:48:55.341086984Z" level=warning msg="cleaning up after shim disconnected" id=737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9 namespace=k8s.io Feb 12 19:48:55.341388 env[1189]: time="2024-02-12T19:48:55.341361701Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.372499 env[1189]: time="2024-02-12T19:48:55.372445209Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4031 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.377409 env[1189]: time="2024-02-12T19:48:55.377345223Z" level=info msg="StopContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" returns successfully" Feb 12 19:48:55.378387 env[1189]: time="2024-02-12T19:48:55.378343352Z" level=info msg="StopPodSandbox for \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\"" Feb 12 19:48:55.378799 env[1189]: time="2024-02-12T19:48:55.378720868Z" level=info msg="Container to stop \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.379066 env[1189]: time="2024-02-12T19:48:55.379036532Z" level=info msg="Container to stop \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.379190 env[1189]: time="2024-02-12T19:48:55.379162605Z" level=info msg="Container to stop \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.379517 env[1189]: time="2024-02-12T19:48:55.379490844Z" level=info msg="Container to stop \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.379644 env[1189]: time="2024-02-12T19:48:55.379614742Z" level=info msg="Container to stop \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.406574 env[1189]: time="2024-02-12T19:48:55.406507467Z" level=info msg="shim disconnected" id=ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b Feb 12 19:48:55.408305 env[1189]: time="2024-02-12T19:48:55.408250690Z" level=warning msg="cleaning up after shim disconnected" id=ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b namespace=k8s.io Feb 12 19:48:55.409185 env[1189]: time="2024-02-12T19:48:55.409146260Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.429570 env[1189]: time="2024-02-12T19:48:55.429426259Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.430383 env[1189]: time="2024-02-12T19:48:55.430306745Z" level=info msg="TearDown network for sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" successfully" Feb 12 19:48:55.430637 env[1189]: time="2024-02-12T19:48:55.430541107Z" level=info msg="StopPodSandbox for \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" returns successfully" Feb 12 19:48:55.450253 env[1189]: 
time="2024-02-12T19:48:55.450098295Z" level=info msg="shim disconnected" id=8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145 Feb 12 19:48:55.450253 env[1189]: time="2024-02-12T19:48:55.450179062Z" level=warning msg="cleaning up after shim disconnected" id=8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145 namespace=k8s.io Feb 12 19:48:55.450253 env[1189]: time="2024-02-12T19:48:55.450239071Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.468601 env[1189]: time="2024-02-12T19:48:55.468365921Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4093 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.469098 env[1189]: time="2024-02-12T19:48:55.469050628Z" level=info msg="TearDown network for sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" successfully" Feb 12 19:48:55.469274 env[1189]: time="2024-02-12T19:48:55.469098175Z" level=info msg="StopPodSandbox for \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" returns successfully" Feb 12 19:48:55.600513 kubelet[2129]: I0212 19:48:55.600161 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hostproc\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.600513 kubelet[2129]: I0212 19:48:55.600291 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-bpf-maps\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.600513 kubelet[2129]: I0212 19:48:55.600334 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hubble-tls\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.600513 kubelet[2129]: I0212 19:48:55.600371 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-xtables-lock\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.601607 kubelet[2129]: I0212 19:48:55.600545 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fe7cce-2177-4f47-8ea3-871da42fdb33-clustermesh-secrets\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.601607 kubelet[2129]: I0212 19:48:55.600585 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-cgroup\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.601607 kubelet[2129]: I0212 19:48:55.600637 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvhc2\" (UniqueName: \"kubernetes.io/projected/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-kube-api-access-nvhc2\") pod \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\" (UID: \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\") " Feb 12 
19:48:55.601607 kubelet[2129]: I0212 19:48:55.600683 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-config-path\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.601607 kubelet[2129]: I0212 19:48:55.600851 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-lib-modules\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.601607 kubelet[2129]: I0212 19:48:55.600902 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-run\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.600943 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98rqk\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-kube-api-access-98rqk\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.600971 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-net\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.601003 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-kernel\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.601035 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-cilium-config-path\") pod \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\" (UID: \"30c12b57-fb52-43ad-bcca-cfa14dd7c4f1\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.601630 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cni-path\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602012 kubelet[2129]: I0212 19:48:55.601658 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-etc-cni-netd\") pod \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\" (UID: \"d1fe7cce-2177-4f47-8ea3-871da42fdb33\") " Feb 12 19:48:55.602335 kubelet[2129]: I0212 19:48:55.601730 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.603272 kubelet[2129]: W0212 19:48:55.602643 2129 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d1fe7cce-2177-4f47-8ea3-871da42fdb33/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:55.605787 kubelet[2129]: I0212 19:48:55.605663 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:55.605787 kubelet[2129]: I0212 19:48:55.605750 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.605787 kubelet[2129]: I0212 19:48:55.605769 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.608294 kubelet[2129]: I0212 19:48:55.608226 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-kube-api-access-nvhc2" (OuterVolumeSpecName: "kube-api-access-nvhc2") pod "30c12b57-fb52-43ad-bcca-cfa14dd7c4f1" (UID: "30c12b57-fb52-43ad-bcca-cfa14dd7c4f1"). InnerVolumeSpecName "kube-api-access-nvhc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:55.608517 kubelet[2129]: I0212 19:48:55.608330 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.611047 kubelet[2129]: I0212 19:48:55.610987 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-kube-api-access-98rqk" (OuterVolumeSpecName: "kube-api-access-98rqk") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "kube-api-access-98rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:55.611289 kubelet[2129]: I0212 19:48:55.611072 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.611289 kubelet[2129]: I0212 19:48:55.611098 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.611441 kubelet[2129]: W0212 19:48:55.611298 2129 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:55.613646 kubelet[2129]: I0212 19:48:55.613585 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:55.613846 kubelet[2129]: I0212 19:48:55.613663 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.614234 kubelet[2129]: I0212 19:48:55.614155 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "30c12b57-fb52-43ad-bcca-cfa14dd7c4f1" (UID: "30c12b57-fb52-43ad-bcca-cfa14dd7c4f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:55.614563 kubelet[2129]: I0212 19:48:55.600092 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hostproc" (OuterVolumeSpecName: "hostproc") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.614696 kubelet[2129]: I0212 19:48:55.614593 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cni-path" (OuterVolumeSpecName: "cni-path") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.614696 kubelet[2129]: I0212 19:48:55.614648 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.619425 kubelet[2129]: I0212 19:48:55.619360 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fe7cce-2177-4f47-8ea3-871da42fdb33-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d1fe7cce-2177-4f47-8ea3-871da42fdb33" (UID: "d1fe7cce-2177-4f47-8ea3-871da42fdb33"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702031 2129 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-lib-modules\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702160 2129 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fe7cce-2177-4f47-8ea3-871da42fdb33-clustermesh-secrets\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702184 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-cgroup\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702220 2129 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nvhc2\" (UniqueName: \"kubernetes.io/projected/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-kube-api-access-nvhc2\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702240 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-config-path\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702257 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cilium-run\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.702321 kubelet[2129]: I0212 19:48:55.702273 2129 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-98rqk\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-kube-api-access-98rqk\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.702289 2129 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-net\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.704275 2129 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-host-proc-sys-kernel\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.704304 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1-cilium-config-path\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.704320 2129 reconciler_common.go:295] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-cni-path\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.704337 2129 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-etc-cni-netd\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704281 kubelet[2129]: I0212 19:48:55.704352 2129 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-xtables-lock\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704922 kubelet[2129]: I0212 19:48:55.704365 2129 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hostproc\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704922 kubelet[2129]: I0212 19:48:55.704378 2129 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fe7cce-2177-4f47-8ea3-871da42fdb33-bpf-maps\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.704922 kubelet[2129]: I0212 19:48:55.704390 2129 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fe7cce-2177-4f47-8ea3-871da42fdb33-hubble-tls\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\"" Feb 12 19:48:55.943377 kubelet[2129]: I0212 19:48:55.941090 2129 scope.go:115] "RemoveContainer" containerID="086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba" Feb 12 19:48:55.960973 env[1189]: time="2024-02-12T19:48:55.949146132Z" level=info msg="RemoveContainer for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\"" Feb 12 19:48:55.976621 env[1189]: time="2024-02-12T19:48:55.975851123Z" level=info msg="RemoveContainer for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" returns successfully" Feb 12 19:48:55.977565 env[1189]: time="2024-02-12T19:48:55.977229812Z" level=error msg="ContainerStatus for \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\": not found" Feb 12 19:48:55.977905 kubelet[2129]: I0212 19:48:55.976760 2129 scope.go:115] "RemoveContainer" containerID="086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba" Feb 12 19:48:55.979326 kubelet[2129]: E0212 19:48:55.979266 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\": not found" containerID="086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba" Feb 12 19:48:55.981396 kubelet[2129]: I0212 19:48:55.981163 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba} err="failed to get container status \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"086ecb7b4b504c96d72db34a542aa868affdfc9fe6f1adebfe192ea8c68dc5ba\": not found" Feb 12 19:48:55.981642 kubelet[2129]: I0212 19:48:55.981622 2129 
scope.go:115] "RemoveContainer" containerID="737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9" Feb 12 19:48:55.994915 env[1189]: time="2024-02-12T19:48:55.994375509Z" level=info msg="RemoveContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\"" Feb 12 19:48:56.005006 env[1189]: time="2024-02-12T19:48:56.004939488Z" level=info msg="RemoveContainer for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" returns successfully" Feb 12 19:48:56.005987 kubelet[2129]: I0212 19:48:56.005956 2129 scope.go:115] "RemoveContainer" containerID="9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459" Feb 12 19:48:56.015443 env[1189]: time="2024-02-12T19:48:56.015382196Z" level=info msg="RemoveContainer for \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\"" Feb 12 19:48:56.035291 env[1189]: time="2024-02-12T19:48:56.035063128Z" level=info msg="RemoveContainer for \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\" returns successfully" Feb 12 19:48:56.037325 kubelet[2129]: I0212 19:48:56.035538 2129 scope.go:115] "RemoveContainer" containerID="9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076" Feb 12 19:48:56.041459 env[1189]: time="2024-02-12T19:48:56.038349586Z" level=info msg="RemoveContainer for \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\"" Feb 12 19:48:56.073302 env[1189]: time="2024-02-12T19:48:56.073171495Z" level=info msg="RemoveContainer for \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\" returns successfully" Feb 12 19:48:56.074123 kubelet[2129]: I0212 19:48:56.074080 2129 scope.go:115] "RemoveContainer" containerID="fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e" Feb 12 19:48:56.077468 env[1189]: time="2024-02-12T19:48:56.077419511Z" level=info msg="RemoveContainer for \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\"" Feb 12 19:48:56.097780 env[1189]: time="2024-02-12T19:48:56.097360082Z" level=info msg="RemoveContainer for \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\" returns successfully" Feb 12 19:48:56.101308 kubelet[2129]: I0212 19:48:56.098543 2129 scope.go:115] "RemoveContainer" containerID="4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43" Feb 12 19:48:56.104962 env[1189]: time="2024-02-12T19:48:56.104916781Z" level=info msg="RemoveContainer for \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\"" Feb 12 19:48:56.118296 env[1189]: time="2024-02-12T19:48:56.118248024Z" level=info msg="RemoveContainer for \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\" returns successfully" Feb 12 19:48:56.119306 kubelet[2129]: I0212 19:48:56.119231 2129 scope.go:115] "RemoveContainer" containerID="737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9" Feb 12 19:48:56.119858 env[1189]: time="2024-02-12T19:48:56.119734971Z" level=error msg="ContainerStatus for \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\": not found" Feb 12 19:48:56.120453 kubelet[2129]: E0212 19:48:56.120280 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\": not found" 
containerID="737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9" Feb 12 19:48:56.120453 kubelet[2129]: I0212 19:48:56.120321 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9} err="failed to get container status \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9\": not found" Feb 12 19:48:56.120453 kubelet[2129]: I0212 19:48:56.120338 2129 scope.go:115] "RemoveContainer" containerID="9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459" Feb 12 19:48:56.121106 env[1189]: time="2024-02-12T19:48:56.121030866Z" level=error msg="ContainerStatus for \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\": not found" Feb 12 19:48:56.121721 kubelet[2129]: E0212 19:48:56.121509 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\": not found" containerID="9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459" Feb 12 19:48:56.121721 kubelet[2129]: I0212 19:48:56.121603 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459} err="failed to get container status \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\": rpc error: code = NotFound desc = an error occurred when try to find container \"9022e6ec79992cc1ac46872d280655adda6a25299924089f5d1860dc7be78459\": not found" Feb 12 19:48:56.121721 kubelet[2129]: I0212 19:48:56.121622 2129 scope.go:115] "RemoveContainer" containerID="9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076" Feb 12 19:48:56.122431 env[1189]: time="2024-02-12T19:48:56.122193188Z" level=error msg="ContainerStatus for \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\": not found" Feb 12 19:48:56.122888 kubelet[2129]: E0212 19:48:56.122828 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\": not found" containerID="9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076" Feb 12 19:48:56.122999 kubelet[2129]: I0212 19:48:56.122934 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076} err="failed to get container status \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d806186fa14d78c15f0064c1d1d58f08b7b66c46777da229883e1f8f4bd0076\": not found" Feb 12 19:48:56.122999 kubelet[2129]: I0212 19:48:56.122951 2129 scope.go:115] "RemoveContainer" containerID="fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e" Feb 12 
19:48:56.123621 env[1189]: time="2024-02-12T19:48:56.123525077Z" level=error msg="ContainerStatus for \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\": not found" Feb 12 19:48:56.130031 kubelet[2129]: E0212 19:48:56.129410 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\": not found" containerID="fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e" Feb 12 19:48:56.130031 kubelet[2129]: I0212 19:48:56.129562 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e} err="failed to get container status \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa419845e478d6c171b2ed19ac490b51e38c90fdb363f86c7bf75b6e6343ad2e\": not found" Feb 12 19:48:56.130031 kubelet[2129]: I0212 19:48:56.129604 2129 scope.go:115] "RemoveContainer" containerID="4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43" Feb 12 19:48:56.131352 env[1189]: time="2024-02-12T19:48:56.131237208Z" level=error msg="ContainerStatus for \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\": not found" Feb 12 19:48:56.132108 kubelet[2129]: E0212 19:48:56.131959 2129 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\": not found" containerID="4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43" Feb 12 19:48:56.132108 kubelet[2129]: I0212 19:48:56.132056 2129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43} err="failed to get container status \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\": rpc error: code = NotFound desc = an error occurred when try to find container \"4065e0ac4c2b85e245c547069e67f3f0d080b2ebabffe8566f2f90424c69ce43\": not found" Feb 12 19:48:56.178118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737c8937f79bdf042b6623d72e998869dfd3eff2e7ef1bebe290c5e7752afaf9-rootfs.mount: Deactivated successfully. Feb 12 19:48:56.178374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b-rootfs.mount: Deactivated successfully. Feb 12 19:48:56.178528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145-rootfs.mount: Deactivated successfully. Feb 12 19:48:56.178633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145-shm.mount: Deactivated successfully. 
Feb 12 19:48:56.178755 systemd[1]: var-lib-kubelet-pods-d1fe7cce\x2d2177\x2d4f47\x2d8ea3\x2d871da42fdb33-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98rqk.mount: Deactivated successfully.
Feb 12 19:48:56.178852 systemd[1]: var-lib-kubelet-pods-30c12b57\x2dfb52\x2d43ad\x2dbcca\x2dcfa14dd7c4f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvhc2.mount: Deactivated successfully.
Feb 12 19:48:56.178983 systemd[1]: var-lib-kubelet-pods-d1fe7cce\x2d2177\x2d4f47\x2d8ea3\x2d871da42fdb33-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:48:56.179233 systemd[1]: var-lib-kubelet-pods-d1fe7cce\x2d2177\x2d4f47\x2d8ea3\x2d871da42fdb33-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:48:56.925220 env[1189]: time="2024-02-12T19:48:56.925086936Z" level=info msg="StopPodSandbox for \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\""
Feb 12 19:48:56.926353 env[1189]: time="2024-02-12T19:48:56.926251475Z" level=info msg="TearDown network for sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" successfully"
Feb 12 19:48:56.926746 env[1189]: time="2024-02-12T19:48:56.926708457Z" level=info msg="StopPodSandbox for \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" returns successfully"
Feb 12 19:48:56.928742 env[1189]: time="2024-02-12T19:48:56.928658250Z" level=info msg="RemovePodSandbox for \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\""
Feb 12 19:48:56.928953 env[1189]: time="2024-02-12T19:48:56.928876616Z" level=info msg="Forcibly stopping sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\""
Feb 12 19:48:56.929094 env[1189]: time="2024-02-12T19:48:56.929025929Z" level=info msg="TearDown network for sandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" successfully"
Feb 12 19:48:56.935593 env[1189]: time="2024-02-12T19:48:56.935478035Z" level=info msg="RemovePodSandbox \"8c6ea53b3c80b179705eea8a8b3d96532109a5a6583fb99dace9c197477e4145\" returns successfully"
Feb 12 19:48:56.936927 env[1189]: time="2024-02-12T19:48:56.936672310Z" level=info msg="StopPodSandbox for \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\""
Feb 12 19:48:56.937590 env[1189]: time="2024-02-12T19:48:56.937416982Z" level=info msg="TearDown network for sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" successfully"
Feb 12 19:48:56.937762 env[1189]: time="2024-02-12T19:48:56.937729063Z" level=info msg="StopPodSandbox for \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" returns successfully"
Feb 12 19:48:56.938601 env[1189]: time="2024-02-12T19:48:56.938560555Z" level=info msg="RemovePodSandbox for \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\""
Feb 12 19:48:56.938897 env[1189]: time="2024-02-12T19:48:56.938834090Z" level=info msg="Forcibly stopping sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\""
Feb 12 19:48:56.939740 env[1189]: time="2024-02-12T19:48:56.939700729Z" level=info msg="TearDown network for sandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" successfully"
Feb 12 19:48:56.954014 env[1189]: time="2024-02-12T19:48:56.953933487Z" level=info msg="RemovePodSandbox \"ce0d05d459ac49584a26b9635866c61975b8652910c0d3368dea5e8209caf89b\" returns successfully"
Feb 12 19:48:57.079869 sshd[3938]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:57.084677 systemd[1]: Started sshd@29-164.90.146.133:22-139.178.68.195:41482.service.
Feb 12 19:48:57.094714 systemd[1]: sshd@28-164.90.146.133:22-139.178.68.195:45488.service: Deactivated successfully.
Feb 12 19:48:57.101108 systemd[1]: session-29.scope: Deactivated successfully.
Feb 12 19:48:57.102650 systemd-logind[1172]: Session 29 logged out. Waiting for processes to exit.
Feb 12 19:48:57.106353 systemd-logind[1172]: Removed session 29.
Feb 12 19:48:57.121885 kubelet[2129]: I0212 19:48:57.121844 2129 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=30c12b57-fb52-43ad-bcca-cfa14dd7c4f1 path="/var/lib/kubelet/pods/30c12b57-fb52-43ad-bcca-cfa14dd7c4f1/volumes"
Feb 12 19:48:57.123555 kubelet[2129]: I0212 19:48:57.123523 2129 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1fe7cce-2177-4f47-8ea3-871da42fdb33 path="/var/lib/kubelet/pods/d1fe7cce-2177-4f47-8ea3-871da42fdb33/volumes"
Feb 12 19:48:57.196565 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 41482 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:48:57.213029 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:57.225026 kubelet[2129]: E0212 19:48:57.224689 2129 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:48:57.235051 systemd[1]: Started session-30.scope.
Feb 12 19:48:57.235833 systemd-logind[1172]: New session 30 of user core.
Feb 12 19:48:58.529908 sshd[4111]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:58.537659 systemd[1]: Started sshd@30-164.90.146.133:22-139.178.68.195:41498.service.
Feb 12 19:48:58.546043 systemd[1]: sshd@29-164.90.146.133:22-139.178.68.195:41482.service: Deactivated successfully.
Feb 12 19:48:58.556847 systemd[1]: session-30.scope: Deactivated successfully.
Feb 12 19:48:58.562867 systemd-logind[1172]: Session 30 logged out. Waiting for processes to exit.
Feb 12 19:48:58.570273 systemd-logind[1172]: Removed session 30.
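The `\x2d` and `\x7e` sequences in the mount-unit names above are systemd's unit-name escaping of the kubelet volume paths under /var/lib/kubelet/pods. A rough Python sketch of the path-escaping rule that `systemd-escape --path` implements (simplified; edge cases such as "." path components are ignored):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping: '/' -> '-', and any character
    outside [A-Za-z0-9:_] (plus non-leading '.') -> C-style \xNN escape."""
    trimmed = path.strip("/")
    if not trimmed:
        return "-"  # the root path escapes to a single dash
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                      # separators become dashes
        elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)                       # allowed chars pass through
        else:
            # '-' becomes \x2d, '~' becomes \x7e, exactly as in the log
            out.extend(f"\\x{b:02x}" for b in ch.encode())
    return "".join(out)

unit = systemd_escape_path(
    "/var/lib/kubelet/pods/d1fe7cce-2177-4f47-8ea3-871da42fdb33"
    "/volumes/kubernetes.io~projected/kube-api-access-98rqk") + ".mount"
# -> var-lib-kubelet-pods-d1fe7cce\x2d2177\x2d4f47\x2d8ea3\x2d871da42fdb33-
#    volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98rqk.mount
```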
Feb 12 19:48:58.582864 kubelet[2129]: I0212 19:48:58.582808 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:48:58.583751 kubelet[2129]: E0212 19:48:58.583713 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="clean-cilium-state"
Feb 12 19:48:58.584097 kubelet[2129]: E0212 19:48:58.584032 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="cilium-agent"
Feb 12 19:48:58.584244 kubelet[2129]: E0212 19:48:58.584226 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="mount-cgroup"
Feb 12 19:48:58.584423 kubelet[2129]: E0212 19:48:58.584405 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="apply-sysctl-overwrites"
Feb 12 19:48:58.584563 kubelet[2129]: E0212 19:48:58.584506 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="mount-bpf-fs"
Feb 12 19:48:58.592451 kubelet[2129]: E0212 19:48:58.592398 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30c12b57-fb52-43ad-bcca-cfa14dd7c4f1" containerName="cilium-operator"
Feb 12 19:48:58.593098 kubelet[2129]: I0212 19:48:58.593024 2129 memory_manager.go:346] "RemoveStaleState removing state" podUID="d1fe7cce-2177-4f47-8ea3-871da42fdb33" containerName="cilium-agent"
Feb 12 19:48:58.593326 kubelet[2129]: I0212 19:48:58.593310 2129 memory_manager.go:346] "RemoveStaleState removing state" podUID="30c12b57-fb52-43ad-bcca-cfa14dd7c4f1" containerName="cilium-operator"
Feb 12 19:48:58.645922 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 41498 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:48:58.648171 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:58.663779 systemd[1]: Started session-31.scope.
Feb 12 19:48:58.664428 systemd-logind[1172]: New session 31 of user core.
Feb 12 19:48:58.743399 kubelet[2129]: I0212 19:48:58.743348 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-hostproc\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.746822 kubelet[2129]: I0212 19:48:58.746760 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-xtables-lock\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.746822 kubelet[2129]: I0212 19:48:58.746837 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-bpf-maps\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746864 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-etc-cni-netd\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746886 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-lib-modules\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746907 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cni-path\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746929 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-run\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746952 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-kernel\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747089 kubelet[2129]: I0212 19:48:58.746973 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-config-path\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747556 kubelet[2129]: I0212 19:48:58.747005 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-cgroup\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747556 kubelet[2129]: I0212 19:48:58.747027 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-clustermesh-secrets\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747556 kubelet[2129]: I0212 19:48:58.747049 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-ipsec-secrets\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747556 kubelet[2129]: I0212 19:48:58.747080 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-kube-api-access-vqjvb\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747556 kubelet[2129]: I0212 19:48:58.747116 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-net\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:58.747782 kubelet[2129]: I0212 19:48:58.747144 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-hubble-tls\") pod \"cilium-hbbtr\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") " pod="kube-system/cilium-hbbtr"
Feb 12 19:48:59.053022 sshd[4124]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:59.060854 systemd[1]: Started sshd@31-164.90.146.133:22-139.178.68.195:41506.service.
Feb 12 19:48:59.076006 systemd-logind[1172]: Session 31 logged out. Waiting for processes to exit.
Feb 12 19:48:59.076477 systemd[1]: sshd@30-164.90.146.133:22-139.178.68.195:41498.service: Deactivated successfully.
Feb 12 19:48:59.078538 systemd[1]: session-31.scope: Deactivated successfully.
Feb 12 19:48:59.082036 systemd-logind[1172]: Removed session 31.
Feb 12 19:48:59.166165 sshd[4141]: Accepted publickey for core from 139.178.68.195 port 41506 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:48:59.168104 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:59.180247 systemd-logind[1172]: New session 32 of user core.
Feb 12 19:48:59.181089 systemd[1]: Started session-32.scope.
Feb 12 19:48:59.223722 kubelet[2129]: E0212 19:48:59.223526 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:48:59.225254 env[1189]: time="2024-02-12T19:48:59.225038674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbbtr,Uid:84da6efe-9127-4c88-8dc5-1bc96f008351,Namespace:kube-system,Attempt:0,}"
Feb 12 19:48:59.272316 env[1189]: time="2024-02-12T19:48:59.271344939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:48:59.272316 env[1189]: time="2024-02-12T19:48:59.271410662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:48:59.272316 env[1189]: time="2024-02-12T19:48:59.271427267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:48:59.272316 env[1189]: time="2024-02-12T19:48:59.271889019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0 pid=4155 runtime=io.containerd.runc.v2
Feb 12 19:48:59.385900 env[1189]: time="2024-02-12T19:48:59.385719951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbbtr,Uid:84da6efe-9127-4c88-8dc5-1bc96f008351,Namespace:kube-system,Attempt:0,} returns sandbox id \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\""
Feb 12 19:48:59.388856 kubelet[2129]: E0212 19:48:59.388806 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:48:59.402443 env[1189]: time="2024-02-12T19:48:59.398388276Z" level=info msg="CreateContainer within sandbox \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:48:59.493022 env[1189]: time="2024-02-12T19:48:59.492907616Z" level=info msg="CreateContainer within sandbox \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\""
Feb 12 19:48:59.494266 env[1189]: time="2024-02-12T19:48:59.494214516Z" level=info msg="StartContainer for \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\""
Feb 12 19:48:59.628374 env[1189]: time="2024-02-12T19:48:59.625044037Z" level=info msg="StartContainer for \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\" returns successfully"
Feb 12 19:48:59.713960 env[1189]: time="2024-02-12T19:48:59.713403610Z" level=info msg="shim disconnected" id=099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939
Feb 12 19:48:59.714464 env[1189]: time="2024-02-12T19:48:59.714401253Z" level=warning msg="cleaning up after shim disconnected" id=099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939 namespace=k8s.io
Feb 12 19:48:59.714464 env[1189]: time="2024-02-12T19:48:59.714459267Z" level=info msg="cleaning up dead shim"
Feb 12 19:48:59.732170 env[1189]: time="2024-02-12T19:48:59.732063710Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4243 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:00.008815 env[1189]: time="2024-02-12T19:49:00.008731841Z" level=info msg="StopPodSandbox for \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\""
Feb 12 19:49:00.009252 env[1189]: time="2024-02-12T19:49:00.009185833Z" level=info msg="Container to stop \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:00.013273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0-shm.mount: Deactivated successfully.
Feb 12 19:49:00.139734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0-rootfs.mount: Deactivated successfully.
Feb 12 19:49:00.153874 env[1189]: time="2024-02-12T19:49:00.153806463Z" level=info msg="shim disconnected" id=84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0
Feb 12 19:49:00.154670 env[1189]: time="2024-02-12T19:49:00.154530992Z" level=warning msg="cleaning up after shim disconnected" id=84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0 namespace=k8s.io
Feb 12 19:49:00.154841 env[1189]: time="2024-02-12T19:49:00.154817017Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:00.173011 env[1189]: time="2024-02-12T19:49:00.172892315Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4276 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:00.173632 env[1189]: time="2024-02-12T19:49:00.173584605Z" level=info msg="TearDown network for sandbox \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\" successfully"
Feb 12 19:49:00.173632 env[1189]: time="2024-02-12T19:49:00.173631344Z" level=info msg="StopPodSandbox for \"84475cedec479ca935231b37d3a544d34191681b0ea3dfce7df5961b4f72acf0\" returns successfully"
Feb 12 19:49:00.284072 kubelet[2129]: I0212 19:49:00.282730 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-run\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.284072 kubelet[2129]: I0212 19:49:00.282840 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.284072 kubelet[2129]: I0212 19:49:00.283000 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-config-path\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.286221 kubelet[2129]: W0212 19:49:00.283503 2129 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/84da6efe-9127-4c88-8dc5-1bc96f008351/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:49:00.287039 kubelet[2129]: I0212 19:49:00.286994 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-hostproc\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.287536 kubelet[2129]: I0212 19:49:00.287407 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-hostproc" (OuterVolumeSpecName: "hostproc") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.287869 kubelet[2129]: I0212 19:49:00.287842 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.288140 kubelet[2129]: I0212 19:49:00.288116 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-bpf-maps\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.288439 kubelet[2129]: I0212 19:49:00.288412 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-etc-cni-netd\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.288856 kubelet[2129]: I0212 19:49:00.288836 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-lib-modules\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.289077 kubelet[2129]: I0212 19:49:00.288618 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.289383 kubelet[2129]: I0212 19:49:00.289024 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290570 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-hubble-tls\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290631 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cni-path\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290663 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-ipsec-secrets\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290690 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-xtables-lock\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290715 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-kernel\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291097 kubelet[2129]: I0212 19:49:00.290743 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-cgroup\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290770 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-clustermesh-secrets\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290803 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-kube-api-access-vqjvb\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290842 2129 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-net\") pod \"84da6efe-9127-4c88-8dc5-1bc96f008351\" (UID: \"84da6efe-9127-4c88-8dc5-1bc96f008351\") "
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290902 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-run\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290916 2129 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-hostproc\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290930 2129 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-etc-cni-netd\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.291526 kubelet[2129]: I0212 19:49:00.290943 2129 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-lib-modules\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.291832 kubelet[2129]: I0212 19:49:00.290956 2129 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-bpf-maps\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.291832 kubelet[2129]: I0212 19:49:00.290995 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.293679 kubelet[2129]: I0212 19:49:00.291995 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.293679 kubelet[2129]: I0212 19:49:00.292059 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cni-path" (OuterVolumeSpecName: "cni-path") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.293679 kubelet[2129]: I0212 19:49:00.292411 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.294173 kubelet[2129]: I0212 19:49:00.294147 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:00.300669 kubelet[2129]: I0212 19:49:00.300600 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:49:00.303723 systemd[1]: var-lib-kubelet-pods-84da6efe\x2d9127\x2d4c88\x2d8dc5\x2d1bc96f008351-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:49:00.306179 kubelet[2129]: I0212 19:49:00.306126 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:49:00.312482 kubelet[2129]: I0212 19:49:00.312400 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:49:00.316913 kubelet[2129]: I0212 19:49:00.316843 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-kube-api-access-vqjvb" (OuterVolumeSpecName: "kube-api-access-vqjvb") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "kube-api-access-vqjvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:49:00.318781 systemd[1]: var-lib-kubelet-pods-84da6efe\x2d9127\x2d4c88\x2d8dc5\x2d1bc96f008351-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:49:00.319057 systemd[1]: var-lib-kubelet-pods-84da6efe\x2d9127\x2d4c88\x2d8dc5\x2d1bc96f008351-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:49:00.322774 kubelet[2129]: I0212 19:49:00.322717 2129 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "84da6efe-9127-4c88-8dc5-1bc96f008351" (UID: "84da6efe-9127-4c88-8dc5-1bc96f008351"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:49:00.391302 kubelet[2129]: I0212 19:49:00.391237 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-config-path\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391302 kubelet[2129]: I0212 19:49:00.391310 2129 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-hubble-tls\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391302 kubelet[2129]: I0212 19:49:00.391327 2129 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cni-path\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391343 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-ipsec-secrets\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391380 2129 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-xtables-lock\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391402 2129 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-kernel\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391418 2129 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-cilium-cgroup\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391438 2129 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84da6efe-9127-4c88-8dc5-1bc96f008351-clustermesh-secrets\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391454 2129 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vqjvb\" (UniqueName: \"kubernetes.io/projected/84da6efe-9127-4c88-8dc5-1bc96f008351-kube-api-access-vqjvb\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.391726 kubelet[2129]: I0212 19:49:00.391474 2129 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84da6efe-9127-4c88-8dc5-1bc96f008351-host-proc-sys-net\") on node \"ci-3510.3.2-3-61711c62be\" DevicePath \"\""
Feb 12 19:49:00.876079 systemd[1]: var-lib-kubelet-pods-84da6efe\x2d9127\x2d4c88\x2d8dc5\x2d1bc96f008351-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvqjvb.mount: Deactivated successfully.
Feb 12 19:49:01.016876 kubelet[2129]: I0212 19:49:01.016821 2129 scope.go:115] "RemoveContainer" containerID="099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939"
Feb 12 19:49:01.021683 env[1189]: time="2024-02-12T19:49:01.021619765Z" level=info msg="RemoveContainer for \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\""
Feb 12 19:49:01.026909 env[1189]: time="2024-02-12T19:49:01.026840640Z" level=info msg="RemoveContainer for \"099f6a30ec20609dd8acf94b3916dc443da0ecffba488491df7943ac1dc4a939\" returns successfully"
Feb 12 19:49:01.110361 kubelet[2129]: E0212 19:49:01.110327 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:01.116523 kubelet[2129]: I0212 19:49:01.116463 2129 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=84da6efe-9127-4c88-8dc5-1bc96f008351 path="/var/lib/kubelet/pods/84da6efe-9127-4c88-8dc5-1bc96f008351/volumes"
Feb 12 19:49:01.200285 kubelet[2129]: I0212 19:49:01.200105 2129 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:49:01.200285 kubelet[2129]: E0212 19:49:01.200275 2129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84da6efe-9127-4c88-8dc5-1bc96f008351" containerName="mount-cgroup"
Feb 12 19:49:01.200577 kubelet[2129]: I0212 19:49:01.200336 2129 memory_manager.go:346] "RemoveStaleState removing state" podUID="84da6efe-9127-4c88-8dc5-1bc96f008351" containerName="mount-cgroup"
Feb 12 19:49:01.274664 kubelet[2129]: I0212 19:49:01.274611 2129 setters.go:548] "Node became not ready" node="ci-3510.3.2-3-61711c62be" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:49:01.274496505 +0000 UTC m=+184.650859604 LastTransitionTime:2024-02-12 19:49:01.274496505 +0000 UTC m=+184.650859604 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339380 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-hostproc\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339442 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-etc-cni-netd\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339476 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/655380d8-2598-4c6a-a1f9-e7d316dbc954-cilium-config-path\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339548 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/655380d8-2598-4c6a-a1f9-e7d316dbc954-cilium-ipsec-secrets\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339574 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/655380d8-2598-4c6a-a1f9-e7d316dbc954-clustermesh-secrets\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.347813 kubelet[2129]: I0212 19:49:01.339595 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-cni-path\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339614 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-cilium-cgroup\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339638 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-bpf-maps\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339665 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-xtables-lock\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339689 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-cilium-run\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339720 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-host-proc-sys-kernel\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.348782 kubelet[2129]: I0212 19:49:01.339748 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/655380d8-2598-4c6a-a1f9-e7d316dbc954-hubble-tls\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.349179 kubelet[2129]: I0212 19:49:01.339770 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-lib-modules\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.349179 kubelet[2129]: I0212 19:49:01.339794 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tf4s\" (UniqueName: \"kubernetes.io/projected/655380d8-2598-4c6a-a1f9-e7d316dbc954-kube-api-access-7tf4s\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.349179 kubelet[2129]: I0212 19:49:01.339824 2129 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/655380d8-2598-4c6a-a1f9-e7d316dbc954-host-proc-sys-net\") pod \"cilium-5jktl\" (UID: \"655380d8-2598-4c6a-a1f9-e7d316dbc954\") " pod="kube-system/cilium-5jktl"
Feb 12 19:49:01.808098 kubelet[2129]: E0212 19:49:01.808055 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:01.810122 env[1189]: time="2024-02-12T19:49:01.809579422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5jktl,Uid:655380d8-2598-4c6a-a1f9-e7d316dbc954,Namespace:kube-system,Attempt:0,}"
Feb 12 19:49:01.847865 env[1189]: time="2024-02-12T19:49:01.847566422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:49:01.847865 env[1189]: time="2024-02-12T19:49:01.847826514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:49:01.848187 env[1189]: time="2024-02-12T19:49:01.847899296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:49:01.860625 env[1189]: time="2024-02-12T19:49:01.859501793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435 pid=4306 runtime=io.containerd.runc.v2
Feb 12 19:49:02.043848 env[1189]: time="2024-02-12T19:49:02.043781791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5jktl,Uid:655380d8-2598-4c6a-a1f9-e7d316dbc954,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\""
Feb 12 19:49:02.048236 kubelet[2129]: E0212 19:49:02.048157 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:02.083852 env[1189]: time="2024-02-12T19:49:02.083667629Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:49:02.150409 env[1189]: time="2024-02-12T19:49:02.150328851Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb\""
Feb 12 19:49:02.153827 env[1189]: time="2024-02-12T19:49:02.153355376Z" level=info msg="StartContainer for \"1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb\""
Feb 12 19:49:02.227335 kubelet[2129]: E0212 19:49:02.227278 2129 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:49:02.290711 env[1189]: time="2024-02-12T19:49:02.290595517Z" level=info msg="StartContainer for \"1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb\" returns successfully"
Feb 12 19:49:02.385079 env[1189]: time="2024-02-12T19:49:02.380485280Z" level=info msg="shim disconnected" id=1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb
Feb 12 19:49:02.385079 env[1189]: time="2024-02-12T19:49:02.380542926Z" level=warning msg="cleaning up after shim disconnected" id=1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb namespace=k8s.io
Feb 12 19:49:02.385079 env[1189]: time="2024-02-12T19:49:02.380552885Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:02.405648 env[1189]: time="2024-02-12T19:49:02.405508862Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4390 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:02.880382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9afb27c7d93f6c4aae1c058885e3b929c4c04182a5559366d1200453620adb-rootfs.mount: Deactivated successfully.
Feb 12 19:49:03.031027 kubelet[2129]: E0212 19:49:03.030988 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:03.046325 env[1189]: time="2024-02-12T19:49:03.041090907Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:49:03.121775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463085632.mount: Deactivated successfully.
Feb 12 19:49:03.135967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935020035.mount: Deactivated successfully.
Feb 12 19:49:03.165805 env[1189]: time="2024-02-12T19:49:03.165714395Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f082675db508095602e08b02a01666adbbcf10df9051de2bbaa73795b77e8053\""
Feb 12 19:49:03.167681 env[1189]: time="2024-02-12T19:49:03.167622161Z" level=info msg="StartContainer for \"f082675db508095602e08b02a01666adbbcf10df9051de2bbaa73795b77e8053\""
Feb 12 19:49:03.305618 env[1189]: time="2024-02-12T19:49:03.300192160Z" level=info msg="StartContainer for \"f082675db508095602e08b02a01666adbbcf10df9051de2bbaa73795b77e8053\" returns successfully"
Feb 12 19:49:03.379485 env[1189]: time="2024-02-12T19:49:03.378057938Z" level=info msg="shim disconnected" id=f082675db508095602e08b02a01666adbbcf10df9051de2bbaa73795b77e8053
Feb 12 19:49:03.379485 env[1189]: time="2024-02-12T19:49:03.378167407Z" level=warning msg="cleaning up after shim disconnected" id=f082675db508095602e08b02a01666adbbcf10df9051de2bbaa73795b77e8053 namespace=k8s.io
Feb 12 19:49:03.379485 env[1189]: time="2024-02-12T19:49:03.378190851Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:03.411379 env[1189]: time="2024-02-12T19:49:03.409831946Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4452 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:04.043759 kubelet[2129]: E0212 19:49:04.043720 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:04.056193 env[1189]: time="2024-02-12T19:49:04.056056308Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:49:04.100267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186330295.mount: Deactivated successfully.
Feb 12 19:49:04.122047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088369910.mount: Deactivated successfully.
Feb 12 19:49:04.137808 env[1189]: time="2024-02-12T19:49:04.137738606Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"479b08af3c5cf10f29c7493428a0b4e44d784404dfb50c4032b1af94006c93dc\""
Feb 12 19:49:04.145705 env[1189]: time="2024-02-12T19:49:04.143056478Z" level=info msg="StartContainer for \"479b08af3c5cf10f29c7493428a0b4e44d784404dfb50c4032b1af94006c93dc\""
Feb 12 19:49:04.288025 env[1189]: time="2024-02-12T19:49:04.287954594Z" level=info msg="StartContainer for \"479b08af3c5cf10f29c7493428a0b4e44d784404dfb50c4032b1af94006c93dc\" returns successfully"
Feb 12 19:49:04.376439 env[1189]: time="2024-02-12T19:49:04.376260330Z" level=info msg="shim disconnected" id=479b08af3c5cf10f29c7493428a0b4e44d784404dfb50c4032b1af94006c93dc
Feb 12 19:49:04.377470 env[1189]: time="2024-02-12T19:49:04.377410765Z" level=warning msg="cleaning up after shim disconnected" id=479b08af3c5cf10f29c7493428a0b4e44d784404dfb50c4032b1af94006c93dc namespace=k8s.io
Feb 12 19:49:04.377775 env[1189]: time="2024-02-12T19:49:04.377685642Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:04.407865 env[1189]: time="2024-02-12T19:49:04.407798107Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4511 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:05.055459 kubelet[2129]: E0212 19:49:05.055421 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:05.066221 env[1189]: time="2024-02-12T19:49:05.065683123Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:49:05.098880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008487353.mount: Deactivated successfully.
Feb 12 19:49:05.119395 env[1189]: time="2024-02-12T19:49:05.119294887Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf9a05b699116c5231ab1176cf18126361ae7e82486b45fab1263493fb701940\""
Feb 12 19:49:05.122745 env[1189]: time="2024-02-12T19:49:05.122693353Z" level=info msg="StartContainer for \"cf9a05b699116c5231ab1176cf18126361ae7e82486b45fab1263493fb701940\""
Feb 12 19:49:05.247013 env[1189]: time="2024-02-12T19:49:05.246940541Z" level=info msg="StartContainer for \"cf9a05b699116c5231ab1176cf18126361ae7e82486b45fab1263493fb701940\" returns successfully"
Feb 12 19:49:05.311844 env[1189]: time="2024-02-12T19:49:05.311663471Z" level=info msg="shim disconnected" id=cf9a05b699116c5231ab1176cf18126361ae7e82486b45fab1263493fb701940
Feb 12 19:49:05.311844 env[1189]: time="2024-02-12T19:49:05.311733913Z" level=warning msg="cleaning up after shim disconnected" id=cf9a05b699116c5231ab1176cf18126361ae7e82486b45fab1263493fb701940 namespace=k8s.io
Feb 12 19:49:05.311844 env[1189]: time="2024-02-12T19:49:05.311748323Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:05.336616 env[1189]: time="2024-02-12T19:49:05.336427180Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4569 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:06.076113 kubelet[2129]: E0212 19:49:06.076071 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:06.086924 env[1189]: time="2024-02-12T19:49:06.086837761Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:49:06.131975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812034816.mount: Deactivated successfully.
Feb 12 19:49:06.149789 env[1189]: time="2024-02-12T19:49:06.149676301Z" level=info msg="CreateContainer within sandbox \"2a827370d059601fe61e0764507b2c2a9b3796296a8999551c207fe602e39435\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7\""
Feb 12 19:49:06.151253 env[1189]: time="2024-02-12T19:49:06.151186396Z" level=info msg="StartContainer for \"78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7\""
Feb 12 19:49:06.283483 env[1189]: time="2024-02-12T19:49:06.283388704Z" level=info msg="StartContainer for \"78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7\" returns successfully"
Feb 12 19:49:06.998286 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:49:07.085180 kubelet[2129]: E0212 19:49:07.085135 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:07.925714 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.7NwGUe.mount: Deactivated successfully.
Feb 12 19:49:08.087173 kubelet[2129]: E0212 19:49:08.087126 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:09.089283 kubelet[2129]: E0212 19:49:09.089190 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:10.271600 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.lAmaJl.mount: Deactivated successfully.
Feb 12 19:49:10.471770 systemd-networkd[1067]: lxc_health: Link UP
Feb 12 19:49:10.479746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:49:10.478400 systemd-networkd[1067]: lxc_health: Gained carrier
Feb 12 19:49:11.811837 kubelet[2129]: E0212 19:49:11.811797 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:11.846927 kubelet[2129]: I0212 19:49:11.846868 2129 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5jktl" podStartSLOduration=10.846789399 pod.CreationTimestamp="2024-02-12 19:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:49:07.117804565 +0000 UTC m=+190.494167671" watchObservedRunningTime="2024-02-12 19:49:11.846789399 +0000 UTC m=+195.223152513"
Feb 12 19:49:12.096456 kubelet[2129]: E0212 19:49:12.096033 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:12.363348 systemd-networkd[1067]: lxc_health: Gained IPv6LL
Feb 12 19:49:12.709900 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.u7FWbZ.mount: Deactivated successfully.
Feb 12 19:49:13.099566 kubelet[2129]: E0212 19:49:13.099527 2129 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:15.206853 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.rw70UG.mount: Deactivated successfully.
Feb 12 19:49:17.431478 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.LTEIbZ.mount: Deactivated successfully.
Feb 12 19:49:19.720301 systemd[1]: run-containerd-runc-k8s.io-78ad91532238c78414c21fb78a7fe346d73077012841200f60d6f6c580a67eb7-runc.EZvgmp.mount: Deactivated successfully.
Feb 12 19:49:20.021953 sshd[4141]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:20.033800 systemd[1]: sshd@31-164.90.146.133:22-139.178.68.195:41506.service: Deactivated successfully.
Feb 12 19:49:20.035734 systemd[1]: session-32.scope: Deactivated successfully.
Feb 12 19:49:20.040289 systemd-logind[1172]: Session 32 logged out. Waiting for processes to exit.
Feb 12 19:49:20.042399 systemd-logind[1172]: Removed session 32.
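For readers who want to slice this transcript programmatically (for instance, to pull out only the kubelet or containerd entries), a small self-contained sketch that splits each line into timestamp, source unit, and message; the helper names are ours, not part of any of the tools logged above:

```python
import re
from typing import NamedTuple

# Matches lines of the form:
#   Feb 12 19:49:20.042399 systemd-logind[1172]: Removed session 32.
LINE = re.compile(
    r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[\w.-]+(?:\[\d+\])?): (?P<msg>.*)$")

class Record(NamedTuple):
    ts: str
    source: str
    msg: str

def parse(transcript: str) -> list[Record]:
    """Keep only lines that match the journal format; skip anything else."""
    records = []
    for line in transcript.splitlines():
        m = LINE.match(line)
        if m:
            records.append(Record(m["ts"], m["source"], m["msg"]))
    return records

recs = parse("Feb 12 19:49:20.042399 systemd-logind[1172]: Removed session 32.")
assert recs[0].source == "systemd-logind[1172]"
```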