Feb 12 19:42:56.073120 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 12 19:42:56.073197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:42:56.073218 kernel: BIOS-provided physical RAM map:
Feb 12 19:42:56.073228 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 19:42:56.073237 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 19:42:56.073246 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 19:42:56.073258 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb 12 19:42:56.073268 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb 12 19:42:56.073281 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 19:42:56.073291 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 19:42:56.073302 kernel: NX (Execute Disable) protection: active
Feb 12 19:42:56.073311 kernel: SMBIOS 2.8 present.
Feb 12 19:42:56.073320 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 12 19:42:56.073329 kernel: Hypervisor detected: KVM
Feb 12 19:42:56.073342 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 19:42:56.073356 kernel: kvm-clock: cpu 0, msr 5efaa001, primary cpu clock
Feb 12 19:42:56.073366 kernel: kvm-clock: using sched offset of 5271338635 cycles
Feb 12 19:42:56.073377 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 19:42:56.073387 kernel: tsc: Detected 2494.138 MHz processor
Feb 12 19:42:56.073397 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:42:56.073407 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:42:56.073417 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb 12 19:42:56.073427 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:42:56.073441 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:42:56.073453 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb 12 19:42:56.073463 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073473 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073484 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073494 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 19:42:56.073504 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073514 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073525 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073538 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:42:56.073549 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 12 19:42:56.073560 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 12 19:42:56.073571 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 19:42:56.073581 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 12 19:42:56.073592 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 12 19:42:56.073605 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 12 19:42:56.073618 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 12 19:42:56.073639 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:42:56.073653 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:42:56.073665 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 19:42:56.073679 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 12 19:42:56.073693 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb 12 19:42:56.073703 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb 12 19:42:56.073718 kernel: Zone ranges:
Feb 12 19:42:56.073729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:42:56.073739 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Feb 12 19:42:56.073750 kernel: Normal empty
Feb 12 19:42:56.073761 kernel: Movable zone start for each node
Feb 12 19:42:56.073772 kernel: Early memory node ranges
Feb 12 19:42:56.073782 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 19:42:56.073794 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb 12 19:42:56.073805 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb 12 19:42:56.073819 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:42:56.073832 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 19:42:56.073842 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb 12 19:42:56.073853 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 19:42:56.073863 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 19:42:56.073874 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:42:56.073885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 19:42:56.073896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 19:42:56.073908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:42:56.073923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 19:42:56.073933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 19:42:56.073944 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:42:56.073955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 19:42:56.073967 kernel: TSC deadline timer available
Feb 12 19:42:56.073983 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:42:56.073999 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 19:42:56.074014 kernel: Booting paravirtualized kernel on KVM
Feb 12 19:42:56.074029 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:42:56.074077 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:42:56.074092 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:42:56.074107 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:42:56.074121 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:42:56.074137 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 19:42:56.074152 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 19:42:56.074169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Feb 12 19:42:56.074183 kernel: Policy zone: DMA32
Feb 12 19:42:56.074200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:42:56.074228 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:42:56.074244 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:42:56.074259 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:42:56.074275 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:42:56.074309 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 19:42:56.074324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:42:56.074339 kernel: Kernel/User page tables isolation: enabled
Feb 12 19:42:56.074353 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:42:56.074374 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:42:56.074388 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:42:56.074404 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:42:56.074419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:42:56.074434 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:42:56.074449 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:42:56.074464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:42:56.074479 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:42:56.074494 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 19:42:56.074516 kernel: random: crng init done
Feb 12 19:42:56.074531 kernel: Console: colour VGA+ 80x25
Feb 12 19:42:56.074546 kernel: printk: console [tty0] enabled
Feb 12 19:42:56.074560 kernel: printk: console [ttyS0] enabled
Feb 12 19:42:56.074574 kernel: ACPI: Core revision 20210730
Feb 12 19:42:56.074589 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 19:42:56.074603 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:42:56.074618 kernel: x2apic enabled
Feb 12 19:42:56.074631 kernel: Switched APIC routing to physical x2apic.
Feb 12 19:42:56.074652 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 19:42:56.074666 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 12 19:42:56.074677 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb 12 19:42:56.074689 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 19:42:56.074700 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 19:42:56.074711 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:42:56.074723 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:42:56.074734 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:42:56.074745 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:42:56.074761 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 12 19:42:56.074784 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 19:42:56.074796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 19:42:56.074812 kernel: MDS: Mitigation: Clear CPU buffers
Feb 12 19:42:56.074824 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:42:56.074835 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:42:56.074846 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:42:56.074858 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:42:56.074870 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:42:56.074930 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 19:42:56.074948 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:42:56.074960 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:42:56.074971 kernel: LSM: Security Framework initializing
Feb 12 19:42:56.074983 kernel: SELinux: Initializing.
Feb 12 19:42:56.074994 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:42:56.075006 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:42:56.075080 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2)
Feb 12 19:42:56.075095 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 12 19:42:56.075107 kernel: signal: max sigframe size: 1776
Feb 12 19:42:56.075120 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:42:56.075133 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:42:56.075147 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:42:56.075158 kernel: x86: Booting SMP configuration:
Feb 12 19:42:56.075170 kernel: .... node #0, CPUs: #1
Feb 12 19:42:56.075182 kernel: kvm-clock: cpu 1, msr 5efaa041, secondary cpu clock
Feb 12 19:42:56.075193 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 19:42:56.075212 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:42:56.075224 kernel: smpboot: Max logical packages: 1
Feb 12 19:42:56.075236 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb 12 19:42:56.075250 kernel: devtmpfs: initialized
Feb 12 19:42:56.075263 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:42:56.075278 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:42:56.075290 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:42:56.075302 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:42:56.075314 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:42:56.075330 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:42:56.075343 kernel: audit: type=2000 audit(1707766974.841:1): state=initialized audit_enabled=0 res=1
Feb 12 19:42:56.075356 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:42:56.075371 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:42:56.075383 kernel: cpuidle: using governor menu
Feb 12 19:42:56.075396 kernel: ACPI: bus type PCI registered
Feb 12 19:42:56.075409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:42:56.075421 kernel: dca service started, version 1.12.1
Feb 12 19:42:56.075433 kernel: PCI: Using configuration type 1 for base access
Feb 12 19:42:56.075450 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:42:56.075463 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:42:56.075474 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:42:56.075487 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:42:56.075499 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:42:56.075509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:42:56.075521 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:42:56.075532 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:42:56.075544 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:42:56.075561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:42:56.075572 kernel: ACPI: Interpreter enabled
Feb 12 19:42:56.075583 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:42:56.075596 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:42:56.075607 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:42:56.075619 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 19:42:56.075631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:42:56.075960 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:42:56.076154 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 19:42:56.076175 kernel: acpiphp: Slot [3] registered
Feb 12 19:42:56.076187 kernel: acpiphp: Slot [4] registered
Feb 12 19:42:56.076200 kernel: acpiphp: Slot [5] registered
Feb 12 19:42:56.076212 kernel: acpiphp: Slot [6] registered
Feb 12 19:42:56.076223 kernel: acpiphp: Slot [7] registered
Feb 12 19:42:56.076235 kernel: acpiphp: Slot [8] registered
Feb 12 19:42:56.076246 kernel: acpiphp: Slot [9] registered
Feb 12 19:42:56.076265 kernel: acpiphp: Slot [10] registered
Feb 12 19:42:56.076276 kernel: acpiphp: Slot [11] registered
Feb 12 19:42:56.076289 kernel: acpiphp: Slot [12] registered
Feb 12 19:42:56.076301 kernel: acpiphp: Slot [13] registered
Feb 12 19:42:56.076312 kernel: acpiphp: Slot [14] registered
Feb 12 19:42:56.076324 kernel: acpiphp: Slot [15] registered
Feb 12 19:42:56.076335 kernel: acpiphp: Slot [16] registered
Feb 12 19:42:56.076347 kernel: acpiphp: Slot [17] registered
Feb 12 19:42:56.076358 kernel: acpiphp: Slot [18] registered
Feb 12 19:42:56.076370 kernel: acpiphp: Slot [19] registered
Feb 12 19:42:56.076386 kernel: acpiphp: Slot [20] registered
Feb 12 19:42:56.076399 kernel: acpiphp: Slot [21] registered
Feb 12 19:42:56.076411 kernel: acpiphp: Slot [22] registered
Feb 12 19:42:56.076422 kernel: acpiphp: Slot [23] registered
Feb 12 19:42:56.076435 kernel: acpiphp: Slot [24] registered
Feb 12 19:42:56.076447 kernel: acpiphp: Slot [25] registered
Feb 12 19:42:56.076459 kernel: acpiphp: Slot [26] registered
Feb 12 19:42:56.076470 kernel: acpiphp: Slot [27] registered
Feb 12 19:42:56.076482 kernel: acpiphp: Slot [28] registered
Feb 12 19:42:56.076499 kernel: acpiphp: Slot [29] registered
Feb 12 19:42:56.076511 kernel: acpiphp: Slot [30] registered
Feb 12 19:42:56.076523 kernel: acpiphp: Slot [31] registered
Feb 12 19:42:56.076535 kernel: PCI host bridge to bus 0000:00
Feb 12 19:42:56.076713 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 19:42:56.076854 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 19:42:56.076974 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 19:42:56.077116 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 19:42:56.077251 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 19:42:56.077394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:42:56.077584 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 19:42:56.077740 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 19:42:56.077933 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 19:42:56.080255 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 12 19:42:56.080483 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 19:42:56.080670 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 19:42:56.080875 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 19:42:56.081114 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 19:42:56.081323 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 12 19:42:56.081473 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 12 19:42:56.081652 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 19:42:56.081849 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 19:42:56.081992 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 19:42:56.087321 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 19:42:56.087561 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 19:42:56.087731 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 19:42:56.087876 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 12 19:42:56.088044 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 19:42:56.088219 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 19:42:56.088376 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:42:56.088554 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 12 19:42:56.088694 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 12 19:42:56.088834 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 19:42:56.089001 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:42:56.089219 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 12 19:42:56.089369 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 12 19:42:56.089515 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 19:42:56.089677 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 12 19:42:56.089819 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 12 19:42:56.089961 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 12 19:42:56.090127 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 19:42:56.090578 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:42:56.090736 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 19:42:56.090875 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 12 19:42:56.091013 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 19:42:56.091180 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:42:56.091322 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 12 19:42:56.091465 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 12 19:42:56.091623 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 12 19:42:56.091775 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 19:42:56.091916 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 12 19:42:56.097395 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 12 19:42:56.097444 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 19:42:56.097460 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 19:42:56.097476 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 19:42:56.097499 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 19:42:56.097513 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 19:42:56.097528 kernel: iommu: Default domain type: Translated
Feb 12 19:42:56.097543 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:42:56.097698 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 19:42:56.097835 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 19:42:56.097971 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 19:42:56.097987 kernel: vgaarb: loaded
Feb 12 19:42:56.098007 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:42:56.098035 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:42:56.098074 kernel: PTP clock support registered
Feb 12 19:42:56.098089 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:42:56.098104 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 19:42:56.098118 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 19:42:56.098133 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb 12 19:42:56.098147 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 19:42:56.098160 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 19:42:56.098176 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 19:42:56.098189 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:42:56.098205 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:42:56.098218 kernel: pnp: PnP ACPI init
Feb 12 19:42:56.098233 kernel: pnp: PnP ACPI: found 4 devices
Feb 12 19:42:56.098246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:42:56.098259 kernel: NET: Registered PF_INET protocol family
Feb 12 19:42:56.098271 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:42:56.098297 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 19:42:56.098314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:42:56.098326 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:42:56.098338 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 19:42:56.098350 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 19:42:56.098363 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:42:56.098375 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:42:56.098386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:42:56.098398 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:42:56.098570 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 19:42:56.098712 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 19:42:56.098846 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 19:42:56.098979 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 19:42:56.099187 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 19:42:56.099354 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 19:42:56.099503 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 19:42:56.099653 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 19:42:56.099683 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 19:42:56.099829 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 48781 usecs
Feb 12 19:42:56.099850 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:42:56.099862 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:42:56.099876 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 12 19:42:56.099888 kernel: Initialise system trusted keyrings
Feb 12 19:42:56.099902 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 19:42:56.099915 kernel: Key type asymmetric registered
Feb 12 19:42:56.099928 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:42:56.099946 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:42:56.099959 kernel: io scheduler mq-deadline registered
Feb 12 19:42:56.099972 kernel: io scheduler kyber registered
Feb 12 19:42:56.099985 kernel: io scheduler bfq registered
Feb 12 19:42:56.099998 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:42:56.100096 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 19:42:56.100115 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 19:42:56.100127 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 19:42:56.100140 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:42:56.100152 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:42:56.100181 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 19:42:56.100195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 19:42:56.100207 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 19:42:56.100220 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 19:42:56.100476 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 12 19:42:56.100609 kernel: rtc_cmos 00:03: registered as rtc0
Feb 12 19:42:56.100737 kernel: rtc_cmos 00:03: setting system clock to 2024-02-12T19:42:55 UTC (1707766975)
Feb 12 19:42:56.100862 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 12 19:42:56.100879 kernel: intel_pstate: CPU model not supported
Feb 12 19:42:56.100892 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:42:56.100905 kernel: Segment Routing with IPv6
Feb 12 19:42:56.100918 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:42:56.100930 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:42:56.100941 kernel: Key type dns_resolver registered
Feb 12 19:42:56.100954 kernel: IPI shorthand broadcast: enabled
Feb 12 19:42:56.100965 kernel: sched_clock: Marking stable (841090200, 149897782)->(1232266867, -241278885)
Feb 12 19:42:56.100984 kernel: registered taskstats version 1
Feb 12 19:42:56.100995 kernel: Loading compiled-in X.509 certificates
Feb 12 19:42:56.101007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 12 19:42:56.101019 kernel: Key type .fscrypt registered
Feb 12 19:42:56.101030 kernel: Key type fscrypt-provisioning registered
Feb 12 19:42:56.101042 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:42:56.101090 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:42:56.101102 kernel: ima: No architecture policies found
Feb 12 19:42:56.101115 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:42:56.101131 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:42:56.101145 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:42:56.101159 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:42:56.101171 kernel: Run /init as init process
Feb 12 19:42:56.101184 kernel: with arguments:
Feb 12 19:42:56.101203 kernel: /init
Feb 12 19:42:56.101241 kernel: with environment:
Feb 12 19:42:56.101256 kernel: HOME=/
Feb 12 19:42:56.101268 kernel: TERM=linux
Feb 12 19:42:56.101284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:42:56.101303 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:42:56.101323 systemd[1]: Detected virtualization kvm.
Feb 12 19:42:56.101338 systemd[1]: Detected architecture x86-64.
Feb 12 19:42:56.101350 systemd[1]: Running in initrd.
Feb 12 19:42:56.101363 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:42:56.101376 systemd[1]: Hostname set to .
Feb 12 19:42:56.101396 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:42:56.101410 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:42:56.101426 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:42:56.101440 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:42:56.101456 systemd[1]: Reached target paths.target.
Feb 12 19:42:56.101471 systemd[1]: Reached target slices.target.
Feb 12 19:42:56.101487 systemd[1]: Reached target swap.target.
Feb 12 19:42:56.101504 systemd[1]: Reached target timers.target.
Feb 12 19:42:56.101524 systemd[1]: Listening on iscsid.socket.
Feb 12 19:42:56.101541 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:42:56.101557 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:42:56.101574 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:42:56.101591 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:42:56.101607 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:42:56.101624 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:42:56.101641 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:42:56.101660 systemd[1]: Reached target sockets.target.
Feb 12 19:42:56.101677 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:42:56.101691 systemd[1]: Finished network-cleanup.service.
Feb 12 19:42:56.101706 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:42:56.101719 systemd[1]: Starting systemd-journald.service...
Feb 12 19:42:56.101731 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:42:56.101748 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:42:56.101761 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:42:56.101773 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:42:56.101787 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:42:56.101800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:42:56.101821 systemd-journald[185]: Journal started
Feb 12 19:42:56.101927 systemd-journald[185]: Runtime Journal (/run/log/journal/efdce518bc0949d0bc924b2cb39f7188) is 4.9M, max 39.5M, 34.5M free.
Feb 12 19:42:56.067605 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 19:42:56.169513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:42:56.169548 kernel: Bridge firewalling registered Feb 12 19:42:56.169579 kernel: SCSI subsystem initialized Feb 12 19:42:56.169663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:42:56.169684 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:42:56.169703 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:42:56.120579 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 12 19:42:56.175642 systemd[1]: Started systemd-journald.service. Feb 12 19:42:56.156947 systemd-resolved[187]: Positive Trust Anchors: Feb 12 19:42:56.180407 kernel: audit: type=1130 audit(1707766976.176:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.156960 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:42:56.186009 kernel: audit: type=1130 audit(1707766976.180:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:56.157011 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:42:56.192125 kernel: audit: type=1130 audit(1707766976.186:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.161346 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 12 19:42:56.196730 kernel: audit: type=1130 audit(1707766976.192:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.176369 systemd[1]: Started systemd-resolved.service. Feb 12 19:42:56.203015 kernel: audit: type=1130 audit(1707766976.197:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.181408 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:42:56.185031 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 12 19:42:56.187448 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:42:56.192919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:42:56.198739 systemd[1]: Reached target nss-lookup.target. Feb 12 19:42:56.202665 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:42:56.205185 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:42:56.221890 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:42:56.234826 kernel: audit: type=1130 audit(1707766976.222:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.235093 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:42:56.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.236555 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:42:56.240012 kernel: audit: type=1130 audit(1707766976.235:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:56.253631 dracut-cmdline[206]: dracut-dracut-053 Feb 12 19:42:56.258188 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:42:56.382105 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:42:56.398083 kernel: iscsi: registered transport (tcp) Feb 12 19:42:56.428106 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:42:56.428226 kernel: QLogic iSCSI HBA Driver Feb 12 19:42:56.497885 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:42:56.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.502088 kernel: audit: type=1130 audit(1707766976.498:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.500391 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 19:42:56.566181 kernel: raid6: avx2x4 gen() 19485 MB/s Feb 12 19:42:56.583149 kernel: raid6: avx2x4 xor() 5365 MB/s Feb 12 19:42:56.600152 kernel: raid6: avx2x2 gen() 16271 MB/s Feb 12 19:42:56.617143 kernel: raid6: avx2x2 xor() 14329 MB/s Feb 12 19:42:56.634162 kernel: raid6: avx2x1 gen() 14850 MB/s Feb 12 19:42:56.651143 kernel: raid6: avx2x1 xor() 13245 MB/s Feb 12 19:42:56.668152 kernel: raid6: sse2x4 gen() 8200 MB/s Feb 12 19:42:56.685155 kernel: raid6: sse2x4 xor() 4574 MB/s Feb 12 19:42:56.702160 kernel: raid6: sse2x2 gen() 8920 MB/s Feb 12 19:42:56.719136 kernel: raid6: sse2x2 xor() 6794 MB/s Feb 12 19:42:56.736183 kernel: raid6: sse2x1 gen() 8041 MB/s Feb 12 19:42:56.754024 kernel: raid6: sse2x1 xor() 5044 MB/s Feb 12 19:42:56.754167 kernel: raid6: using algorithm avx2x4 gen() 19485 MB/s Feb 12 19:42:56.754187 kernel: raid6: .... xor() 5365 MB/s, rmw enabled Feb 12 19:42:56.754932 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:42:56.776124 kernel: xor: automatically using best checksumming function avx Feb 12 19:42:56.929120 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:42:56.946072 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:42:56.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.948440 systemd[1]: Starting systemd-udevd.service... Feb 12 19:42:56.954636 kernel: audit: type=1130 audit(1707766976.946:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.947000 audit: BPF prog-id=7 op=LOAD Feb 12 19:42:56.947000 audit: BPF prog-id=8 op=LOAD Feb 12 19:42:56.974478 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 12 19:42:56.983547 systemd[1]: Started systemd-udevd.service. 
Feb 12 19:42:56.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:56.988867 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:42:57.012017 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Feb 12 19:42:57.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:57.077116 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:42:57.079636 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:42:57.153869 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:42:57.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:57.243258 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 12 19:42:57.251414 kernel: scsi host0: Virtio SCSI HBA Feb 12 19:42:57.263166 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:42:57.263264 kernel: GPT:9289727 != 125829119 Feb 12 19:42:57.263284 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:42:57.263299 kernel: GPT:9289727 != 125829119 Feb 12 19:42:57.263315 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:42:57.263330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:42:57.282116 kernel: virtio_blk virtio5: [vdb] 1000 512-byte logical blocks (512 kB/500 KiB) Feb 12 19:42:57.282603 kernel: libata version 3.00 loaded. 
Feb 12 19:42:57.292165 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:42:57.302083 kernel: scsi host1: ata_piix Feb 12 19:42:57.304086 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:42:57.316352 kernel: scsi host2: ata_piix Feb 12 19:42:57.316670 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 12 19:42:57.316696 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 12 19:42:57.338095 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:42:57.353090 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) Feb 12 19:42:57.376101 kernel: AES CTR mode by8 optimization enabled Feb 12 19:42:57.384739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:42:57.392750 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:42:57.396023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:42:57.400893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:42:57.405600 kernel: ACPI: bus type USB registered Feb 12 19:42:57.405640 kernel: usbcore: registered new interface driver usbfs Feb 12 19:42:57.405660 kernel: usbcore: registered new interface driver hub Feb 12 19:42:57.405677 kernel: usbcore: registered new device driver usb Feb 12 19:42:57.408341 systemd[1]: Starting disk-uuid.service... Feb 12 19:42:57.426253 disk-uuid[472]: Primary Header is updated. Feb 12 19:42:57.426253 disk-uuid[472]: Secondary Entries is updated. Feb 12 19:42:57.426253 disk-uuid[472]: Secondary Header is updated. 
Feb 12 19:42:57.659897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:42:57.659946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:42:57.659964 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Feb 12 19:42:57.659996 kernel: ehci-pci: EHCI PCI platform driver Feb 12 19:42:57.660013 kernel: uhci_hcd: USB Universal Host Controller Interface driver Feb 12 19:42:57.660031 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 12 19:42:57.660270 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 12 19:42:57.660413 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 12 19:42:57.660550 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Feb 12 19:42:57.660698 kernel: hub 1-0:1.0: USB hub found Feb 12 19:42:57.660879 kernel: hub 1-0:1.0: 2 ports detected Feb 12 19:42:57.426961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:42:58.451583 disk-uuid[474]: The operation has completed successfully. Feb 12 19:42:58.452562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:42:58.516143 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:42:58.516314 systemd[1]: Finished disk-uuid.service. Feb 12 19:42:58.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.523703 systemd[1]: Starting verity-setup.service... Feb 12 19:42:58.548107 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:42:58.640158 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:42:58.643436 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:42:58.648411 systemd[1]: Finished verity-setup.service. 
Feb 12 19:42:58.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.749070 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:42:58.749512 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:42:58.750119 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:42:58.751417 systemd[1]: Starting ignition-setup.service... Feb 12 19:42:58.753136 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:42:58.768937 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:42:58.769023 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:42:58.769084 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:42:58.797628 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:42:58.808200 systemd[1]: Finished ignition-setup.service. Feb 12 19:42:58.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.810655 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:42:58.927173 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:42:58.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.928000 audit: BPF prog-id=9 op=LOAD Feb 12 19:42:58.929959 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:42:58.970423 systemd-networkd[687]: lo: Link UP Feb 12 19:42:58.971556 systemd-networkd[687]: lo: Gained carrier Feb 12 19:42:58.972722 systemd-networkd[687]: Enumeration completed Feb 12 19:42:58.973791 systemd-networkd[687]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:42:58.976381 systemd[1]: Started systemd-networkd.service. Feb 12 19:42:58.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:58.976995 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 12 19:42:58.977240 systemd[1]: Reached target network.target. Feb 12 19:42:58.980158 systemd-networkd[687]: eth1: Link UP Feb 12 19:42:58.980165 systemd-networkd[687]: eth1: Gained carrier Feb 12 19:42:58.983585 systemd[1]: Starting iscsiuio.service... Feb 12 19:42:58.986801 systemd-networkd[687]: eth0: Link UP Feb 12 19:42:58.987363 systemd-networkd[687]: eth0: Gained carrier Feb 12 19:42:59.011252 systemd-networkd[687]: eth0: DHCPv4 address 64.23.173.239/20, gateway 64.23.160.1 acquired from 169.254.169.253 Feb 12 19:42:59.014897 systemd[1]: Started iscsiuio.service. Feb 12 19:42:59.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.018033 systemd-networkd[687]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253 Feb 12 19:42:59.020073 systemd[1]: Starting iscsid.service... Feb 12 19:42:59.026263 iscsid[692]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:42:59.026263 iscsid[692]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:42:59.026263 iscsid[692]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:42:59.026263 iscsid[692]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:42:59.026263 iscsid[692]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:42:59.026263 iscsid[692]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:42:59.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.031336 systemd[1]: Started iscsid.service. Feb 12 19:42:59.042121 systemd[1]: Starting dracut-initqueue.service... 
Feb 12 19:42:59.061976 ignition[619]: Ignition 2.14.0 Feb 12 19:42:59.061995 ignition[619]: Stage: fetch-offline Feb 12 19:42:59.062122 ignition[619]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.062162 ignition[619]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.068270 ignition[619]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.068486 ignition[619]: parsed url from cmdline: "" Feb 12 19:42:59.068493 ignition[619]: no config URL provided Feb 12 19:42:59.068502 ignition[619]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:42:59.068517 ignition[619]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:42:59.068525 ignition[619]: failed to fetch config: resource requires networking Feb 12 19:42:59.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.068869 ignition[619]: Ignition finished successfully Feb 12 19:42:59.072307 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:42:59.074037 systemd[1]: Starting ignition-fetch.service... Feb 12 19:42:59.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.079989 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:42:59.080762 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:42:59.081346 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:42:59.081774 systemd[1]: Reached target remote-fs.target. Feb 12 19:42:59.085342 systemd[1]: Starting dracut-pre-mount.service... 
Feb 12 19:42:59.096857 ignition[701]: Ignition 2.14.0 Feb 12 19:42:59.096872 ignition[701]: Stage: fetch Feb 12 19:42:59.097074 ignition[701]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.097106 ignition[701]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.101086 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.101218 ignition[701]: parsed url from cmdline: "" Feb 12 19:42:59.101224 ignition[701]: no config URL provided Feb 12 19:42:59.101233 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:42:59.101246 ignition[701]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:42:59.101291 ignition[701]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 12 19:42:59.104519 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:42:59.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.127097 ignition[701]: GET result: OK Feb 12 19:42:59.127260 ignition[701]: parsing config with SHA512: c3f445f9589e5f04a23071d53fca48802805237cf5efb4d49cb699bb2ffcdc5bc334425a686c23a4db449e313dacf41c3fc6d0fe6a017fb7ae4e5754023b84df Feb 12 19:42:59.191573 unknown[701]: fetched base config from "system" Feb 12 19:42:59.191589 unknown[701]: fetched base config from "system" Feb 12 19:42:59.191599 unknown[701]: fetched user config from "digitalocean" Feb 12 19:42:59.192397 ignition[701]: fetch: fetch complete Feb 12 19:42:59.192406 ignition[701]: fetch: fetch passed Feb 12 19:42:59.192474 ignition[701]: Ignition finished successfully Feb 12 19:42:59.194698 systemd[1]: Finished ignition-fetch.service. 
Feb 12 19:42:59.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.196595 systemd[1]: Starting ignition-kargs.service... Feb 12 19:42:59.214474 ignition[713]: Ignition 2.14.0 Feb 12 19:42:59.214489 ignition[713]: Stage: kargs Feb 12 19:42:59.214727 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.214757 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.217847 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.224989 ignition[713]: kargs: kargs passed Feb 12 19:42:59.225111 ignition[713]: Ignition finished successfully Feb 12 19:42:59.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.227380 systemd[1]: Finished ignition-kargs.service. Feb 12 19:42:59.229540 systemd[1]: Starting ignition-disks.service... Feb 12 19:42:59.244692 ignition[719]: Ignition 2.14.0 Feb 12 19:42:59.244706 ignition[719]: Stage: disks Feb 12 19:42:59.244882 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.244902 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.247759 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.250436 ignition[719]: disks: disks passed Feb 12 19:42:59.250566 ignition[719]: Ignition finished successfully Feb 12 19:42:59.251706 systemd[1]: Finished ignition-disks.service. 
Feb 12 19:42:59.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.252577 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:42:59.253607 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:42:59.254783 systemd[1]: Reached target local-fs.target. Feb 12 19:42:59.255512 systemd[1]: Reached target sysinit.target. Feb 12 19:42:59.256274 systemd[1]: Reached target basic.target. Feb 12 19:42:59.258139 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:42:59.281514 systemd-fsck[727]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:42:59.287410 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:42:59.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.289388 systemd[1]: Mounting sysroot.mount... Feb 12 19:42:59.305423 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:42:59.306619 systemd[1]: Mounted sysroot.mount. Feb 12 19:42:59.307801 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:42:59.311081 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:42:59.313471 systemd[1]: Starting flatcar-digitalocean-network.service... Feb 12 19:42:59.316716 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:42:59.317971 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:42:59.319253 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:42:59.322030 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:42:59.324797 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:42:59.334767 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:42:59.351797 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:42:59.367663 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:42:59.386686 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:42:59.473528 coreos-metadata[734]: Feb 12 19:42:59.473 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:42:59.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.482976 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:42:59.484648 systemd[1]: Starting ignition-mount.service... Feb 12 19:42:59.488215 systemd[1]: Starting sysroot-boot.service... Feb 12 19:42:59.492215 coreos-metadata[734]: Feb 12 19:42:59.492 INFO Fetch successful Feb 12 19:42:59.499904 coreos-metadata[734]: Feb 12 19:42:59.499 INFO wrote hostname ci-3510.3.2-4-a1ae76f648 to /sysroot/etc/hostname Feb 12 19:42:59.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.502582 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:42:59.513592 bash[785]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 12 19:42:59.522884 coreos-metadata[733]: Feb 12 19:42:59.522 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:42:59.532973 ignition[786]: INFO : Ignition 2.14.0 Feb 12 19:42:59.532973 ignition[786]: INFO : Stage: mount Feb 12 19:42:59.534559 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.534559 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.536595 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.539193 coreos-metadata[733]: Feb 12 19:42:59.539 INFO Fetch successful Feb 12 19:42:59.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.548377 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 12 19:42:59.548523 systemd[1]: Finished flatcar-digitalocean-network.service. Feb 12 19:42:59.550754 ignition[786]: INFO : mount: mount passed Feb 12 19:42:59.551187 ignition[786]: INFO : Ignition finished successfully Feb 12 19:42:59.552943 systemd[1]: Finished ignition-mount.service. Feb 12 19:42:59.553714 systemd[1]: Finished sysroot-boot.service. Feb 12 19:42:59.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:59.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:59.671179 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:42:59.684727 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (794) Feb 12 19:42:59.697847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:42:59.697940 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:42:59.697961 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:42:59.704817 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:42:59.707287 systemd[1]: Starting ignition-files.service... Feb 12 19:42:59.738649 ignition[814]: INFO : Ignition 2.14.0 Feb 12 19:42:59.738649 ignition[814]: INFO : Stage: files Feb 12 19:42:59.740149 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:42:59.740149 ignition[814]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:42:59.741914 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:42:59.751732 ignition[814]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:42:59.754803 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:42:59.754803 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:42:59.763143 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:42:59.764573 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:42:59.767559 unknown[814]: wrote ssh authorized keys file for user: core 
Feb 12 19:42:59.769446 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:42:59.769446 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:42:59.774009 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 19:42:59.851362 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:42:59.942699 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:42:59.943904 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:42:59.945002 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:43:00.471046 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:43:00.610341 systemd-networkd[687]: eth0: Gained IPv6LL Feb 12 19:43:00.662528 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 19:43:00.663985 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:43:00.663985 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:43:00.663985 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 19:43:00.738594 systemd-networkd[687]: eth1: Gained IPv6LL Feb 12 19:43:01.164837 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:43:01.721580 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 19:43:01.721580 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:43:01.721580 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:43:01.721580 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:43:01.731679 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:43:01.731679 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 12 19:43:01.822080 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:43:02.222602 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 12 19:43:02.224378 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:43:02.224378 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/opt/bin/kubelet" Feb 12 19:43:02.224378 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:43:02.285951 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:43:03.351302 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 19:43:03.353034 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:43:03.353034 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:43:03.353034 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:43:03.399672 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 12 19:43:03.697393 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 19:43:03.697393 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:43:03.700353 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:43:03.700353 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 12 19:43:04.133013 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
result: OK Feb 12 19:43:04.203031 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:43:04.205158 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:43:04.205158 ignition[814]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:43:04.205158 ignition[814]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:43:04.205158 ignition[814]: INFO : files: op(11): 
[started] processing unit "prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(18): 
[started] setting preset to enabled for "prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:43:04.218703 ignition[814]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:43:04.261563 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 12 19:43:04.261632 kernel: audit: type=1130 audit(1707766984.221:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.261654 kernel: audit: type=1130 audit(1707766984.257:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:04.261848 ignition[814]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:43:04.261848 ignition[814]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:43:04.261848 ignition[814]: INFO : files: files passed Feb 12 19:43:04.261848 ignition[814]: INFO : Ignition finished successfully Feb 12 19:43:04.280968 kernel: audit: type=1130 audit(1707766984.261:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.280999 kernel: audit: type=1131 audit(1707766984.261:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.218875 systemd[1]: Finished ignition-files.service. Feb 12 19:43:04.225166 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:43:04.283030 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:43:04.247439 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:43:04.248843 systemd[1]: Starting ignition-quench.service... Feb 12 19:43:04.256513 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Feb 12 19:43:04.257675 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:43:04.257807 systemd[1]: Finished ignition-quench.service. Feb 12 19:43:04.262208 systemd[1]: Reached target ignition-complete.target. Feb 12 19:43:04.275354 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:43:04.296527 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:43:04.296698 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:43:04.304025 kernel: audit: type=1130 audit(1707766984.297:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.304092 kernel: audit: type=1131 audit(1707766984.297:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.298014 systemd[1]: Reached target initrd-fs.target. Feb 12 19:43:04.304361 systemd[1]: Reached target initrd.target. Feb 12 19:43:04.305110 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:43:04.306632 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:43:04.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.323815 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 12 19:43:04.328119 kernel: audit: type=1130 audit(1707766984.324:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.328667 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:43:04.340523 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:43:04.341706 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:43:04.342781 systemd[1]: Stopped target timers.target. Feb 12 19:43:04.343687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:43:04.344352 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:43:04.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.345543 systemd[1]: Stopped target initrd.target. Feb 12 19:43:04.349117 kernel: audit: type=1131 audit(1707766984.345:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.349694 systemd[1]: Stopped target basic.target. Feb 12 19:43:04.351153 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:43:04.352186 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:43:04.353142 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:43:04.354231 systemd[1]: Stopped target remote-fs.target. Feb 12 19:43:04.355400 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:43:04.356434 systemd[1]: Stopped target sysinit.target. Feb 12 19:43:04.357800 systemd[1]: Stopped target local-fs.target. Feb 12 19:43:04.358907 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:43:04.360028 systemd[1]: Stopped target swap.target. Feb 12 19:43:04.361167 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 12 19:43:04.361839 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:43:04.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.363309 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:43:04.366142 kernel: audit: type=1131 audit(1707766984.362:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.366909 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:43:04.367122 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:43:04.368506 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:43:04.372906 kernel: audit: type=1131 audit(1707766984.368:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.368707 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:43:04.373511 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:43:04.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.373689 systemd[1]: Stopped ignition-files.service. 
Feb 12 19:43:04.374579 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:43:04.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.374741 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:43:04.377066 systemd[1]: Stopping ignition-mount.service... Feb 12 19:43:04.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.379753 systemd[1]: Stopping iscsiuio.service... Feb 12 19:43:04.384414 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:43:04.384854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:43:04.385107 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:43:04.397305 ignition[852]: INFO : Ignition 2.14.0 Feb 12 19:43:04.397305 ignition[852]: INFO : Stage: umount Feb 12 19:43:04.397305 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:04.397305 ignition[852]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:43:04.388544 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:43:04.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:04.406778 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:43:04.406778 ignition[852]: INFO : umount: umount passed Feb 12 19:43:04.406778 ignition[852]: INFO : Ignition finished successfully Feb 12 19:43:04.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.388712 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:43:04.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.391020 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:43:04.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.393335 systemd[1]: Stopped iscsiuio.service. Feb 12 19:43:04.406690 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:43:04.406831 systemd[1]: Stopped ignition-mount.service. Feb 12 19:43:04.408520 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:43:04.408674 systemd[1]: Stopped ignition-disks.service. Feb 12 19:43:04.411561 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Feb 12 19:43:04.411626 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:43:04.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.413402 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:43:04.413486 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:43:04.414083 systemd[1]: Stopped target network.target. Feb 12 19:43:04.419243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:43:04.419358 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:43:04.419932 systemd[1]: Stopped target paths.target. Feb 12 19:43:04.420439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:43:04.422788 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:43:04.423408 systemd[1]: Stopped target slices.target. Feb 12 19:43:04.423903 systemd[1]: Stopped target sockets.target. Feb 12 19:43:04.424809 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:43:04.424861 systemd[1]: Closed iscsid.socket. Feb 12 19:43:04.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.425524 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:43:04.425577 systemd[1]: Closed iscsiuio.socket. Feb 12 19:43:04.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:04.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.433950 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:43:04.434042 systemd[1]: Stopped ignition-setup.service. Feb 12 19:43:04.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.435066 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:43:04.435904 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:43:04.438511 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:43:04.447000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:43:04.440873 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:43:04.441083 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:43:04.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.441114 systemd-networkd[687]: eth0: DHCPv6 lease lost Feb 12 19:43:04.442846 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:43:04.442941 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:43:04.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.444143 systemd-networkd[687]: eth1: DHCPv6 lease lost Feb 12 19:43:04.452000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:43:04.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:43:04.444324 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:43:04.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.444457 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:43:04.445276 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:43:04.445389 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:43:04.447783 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:43:04.447828 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:43:04.448538 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:43:04.448599 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:43:04.450605 systemd[1]: Stopping network-cleanup.service... Feb 12 19:43:04.451168 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:43:04.451254 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:43:04.452018 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:43:04.452107 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:43:04.452892 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:43:04.452943 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:43:04.453567 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:43:04.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.460699 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:43:04.461541 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:43:04.461833 systemd[1]: Stopped systemd-udevd.service. 
Feb 12 19:43:04.464120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:43:04.464205 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:43:04.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.465249 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:43:04.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.465297 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:43:04.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.466042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:43:04.466135 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:43:04.467354 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:43:04.467411 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:43:04.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.468124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:43:04.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:43:04.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.468165 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:43:04.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:04.473005 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:43:04.474014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:43:04.474147 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:43:04.475339 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:43:04.475411 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:43:04.476428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:43:04.476476 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:43:04.478708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:43:04.479435 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:43:04.479576 systemd[1]: Stopped network-cleanup.service. Feb 12 19:43:04.484022 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:43:04.484192 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:43:04.484904 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:43:04.486893 systemd[1]: Starting initrd-switch-root.service... 
Feb 12 19:43:04.504145 systemd[1]: Switching root.
Feb 12 19:43:04.525043 systemd-journald[185]: Journal stopped
Feb 12 19:43:09.500816 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:43:09.500892 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:43:09.500934 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:43:09.500951 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:43:09.500968 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:43:09.500984 kernel: SELinux: policy capability open_perms=1
Feb 12 19:43:09.501011 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:43:09.501034 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:43:09.501488 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:43:09.501540 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:43:09.501565 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:43:09.501578 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:43:09.501593 systemd[1]: Successfully loaded SELinux policy in 54.347ms.
Feb 12 19:43:09.501621 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.077ms.
Feb 12 19:43:09.501649 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:43:09.501665 systemd[1]: Detected virtualization kvm.
Feb 12 19:43:09.501678 systemd[1]: Detected architecture x86-64.
Feb 12 19:43:09.501694 systemd[1]: Detected first boot.
Feb 12 19:43:09.501711 systemd[1]: Hostname set to .
Feb 12 19:43:09.501728 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:43:09.501747 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:43:09.501764 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:43:09.501789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:43:09.501807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:43:09.501840 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:43:09.501854 kernel: kauditd_printk_skb: 52 callbacks suppressed
Feb 12 19:43:09.501865 kernel: audit: type=1334 audit(1707766989.218:93): prog-id=14 op=LOAD
Feb 12 19:43:09.501876 kernel: audit: type=1334 audit(1707766989.222:94): prog-id=4 op=UNLOAD
Feb 12 19:43:09.501888 kernel: audit: type=1334 audit(1707766989.222:95): prog-id=5 op=UNLOAD
Feb 12 19:43:09.501899 kernel: audit: type=1334 audit(1707766989.229:96): prog-id=15 op=LOAD
Feb 12 19:43:09.501916 kernel: audit: type=1334 audit(1707766989.229:97): prog-id=12 op=UNLOAD
Feb 12 19:43:09.501927 kernel: audit: type=1334 audit(1707766989.231:98): prog-id=16 op=LOAD
Feb 12 19:43:09.501939 kernel: audit: type=1334 audit(1707766989.233:99): prog-id=17 op=LOAD
Feb 12 19:43:09.501951 kernel: audit: type=1334 audit(1707766989.233:100): prog-id=13 op=UNLOAD
Feb 12 19:43:09.501966 kernel: audit: type=1334 audit(1707766989.233:101): prog-id=14 op=UNLOAD
Feb 12 19:43:09.501983 kernel: audit: type=1131 audit(1707766989.236:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.501999 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:43:09.502023 systemd[1]: Stopped iscsid.service.
Feb 12 19:43:09.502036 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:43:09.502106 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:43:09.502120 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:43:09.502132 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:43:09.502168 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:43:09.502186 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 19:43:09.502211 systemd[1]: Created slice system-getty.slice.
Feb 12 19:43:09.502376 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:43:09.502402 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:43:09.502418 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:43:09.502436 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:43:09.502455 systemd[1]: Created slice user.slice.
Feb 12 19:43:09.502473 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:43:09.502491 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:43:09.502516 systemd[1]: Set up automount boot.automount.
Feb 12 19:43:09.502534 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:43:09.502564 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:43:09.502606 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:43:09.502628 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:43:09.502659 systemd[1]: Reached target integritysetup.target.
Feb 12 19:43:09.502699 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:43:09.502721 systemd[1]: Reached target remote-fs.target.
Feb 12 19:43:09.502742 systemd[1]: Reached target slices.target.
Feb 12 19:43:09.502761 systemd[1]: Reached target swap.target.
Feb 12 19:43:09.502789 systemd[1]: Reached target torcx.target.
Feb 12 19:43:09.502824 systemd[1]: Reached target veritysetup.target.
Feb 12 19:43:09.502839 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:43:09.502853 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:43:09.502873 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:43:09.502893 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:43:09.502912 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:43:09.502932 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:43:09.502950 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:43:09.502972 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:43:09.503001 systemd[1]: Mounting media.mount...
Feb 12 19:43:09.503023 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:09.503044 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:43:09.503099 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:43:09.503140 systemd[1]: Mounting tmp.mount...
Feb 12 19:43:09.503163 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:43:09.503181 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:43:09.503196 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:43:09.503211 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:43:09.503234 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:43:09.503252 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:43:09.503270 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:43:09.503290 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:43:09.503312 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:43:09.503334 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:43:09.503357 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:43:09.503404 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:43:09.503430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:43:09.503452 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:43:09.503473 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:43:09.503495 systemd[1]: Starting systemd-journald.service...
Feb 12 19:43:09.503531 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:43:09.503553 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:43:09.503597 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:43:09.503631 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:43:09.503670 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:43:09.503693 systemd[1]: Stopped verity-setup.service.
Feb 12 19:43:09.503716 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:09.503739 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:43:09.503763 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:43:09.503786 systemd[1]: Mounted media.mount.
Feb 12 19:43:09.503810 kernel: loop: module loaded
Feb 12 19:43:09.503832 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:43:09.503869 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:43:09.503891 systemd[1]: Mounted tmp.mount.
Feb 12 19:43:09.503915 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:43:09.503937 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:43:09.503959 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:43:09.504010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:43:09.504035 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:43:09.504079 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:43:09.504101 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:43:09.504123 kernel: fuse: init (API version 7.34)
Feb 12 19:43:09.504145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:43:09.504177 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:43:09.504221 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:43:09.505644 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:43:09.505691 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:43:09.505726 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:43:09.505753 systemd-journald[954]: Journal started
Feb 12 19:43:09.505833 systemd-journald[954]: Runtime Journal (/run/log/journal/efdce518bc0949d0bc924b2cb39f7188) is 4.9M, max 39.5M, 34.5M free.
Feb 12 19:43:04.697000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:43:04.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:43:04.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:43:04.768000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:43:04.768000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:43:04.768000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:43:04.768000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:43:04.899000 audit[884]: AVC avc: denied { associate } for pid=884 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:43:04.899000 audit[884]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=867 pid=884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:04.899000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:43:04.901000 audit[884]: AVC avc: denied { associate } for pid=884 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 19:43:04.901000 audit[884]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=867 pid=884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:04.901000 audit: CWD cwd="/"
Feb 12 19:43:04.901000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:04.901000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:04.901000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:43:09.210000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:43:09.210000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:43:09.210000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:43:09.218000 audit: BPF prog-id=14 op=LOAD
Feb 12 19:43:09.222000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:43:09.222000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:43:09.229000 audit: BPF prog-id=15 op=LOAD
Feb 12 19:43:09.229000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:43:09.231000 audit: BPF prog-id=16 op=LOAD
Feb 12 19:43:09.233000 audit: BPF prog-id=17 op=LOAD
Feb 12 19:43:09.233000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:43:09.233000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 19:43:09.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.249000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 19:43:09.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.419000 audit: BPF prog-id=18 op=LOAD
Feb 12 19:43:09.419000 audit: BPF prog-id=19 op=LOAD
Feb 12 19:43:09.419000 audit: BPF prog-id=20 op=LOAD
Feb 12 19:43:09.419000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 19:43:09.419000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 19:43:09.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.508125 systemd[1]: Started systemd-journald.service.
Feb 12 19:43:09.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.498000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:43:09.498000 audit[954]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdaa738580 a2=4000 a3=7ffdaa73861c items=0 ppid=1 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:09.498000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:43:09.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.206084 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:43:04.894867 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:43:09.206106 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:43:04.895550 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:43:09.235758 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 19:43:04.895578 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:43:04.895626 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 19:43:04.895642 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 19:43:04.895702 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 19:43:04.895721 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 19:43:04.896028 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 19:43:09.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:04.896119 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:43:09.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.511485 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:43:04.896143 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:43:09.512319 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:43:04.898854 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 19:43:09.513265 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:43:04.898897 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 19:43:09.514579 systemd[1]: Reached target network-pre.target.
Feb 12 19:43:04.898919 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 19:43:04.898934 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 19:43:04.898954 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 19:43:04.898968 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 19:43:08.430709 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:43:08.431124 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:43:08.431365 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:43:08.431872 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:43:08.431949 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 19:43:08.432038 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2024-02-12T19:43:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 19:43:09.517259 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:43:09.524749 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:43:09.530170 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:43:09.536646 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:43:09.541740 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:43:09.542499 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:43:09.553626 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:43:09.554454 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:43:09.557846 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:43:09.565727 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:43:09.568992 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:43:09.573304 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:43:09.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.574191 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:43:09.589950 systemd-journald[954]: Time spent on flushing to /var/log/journal/efdce518bc0949d0bc924b2cb39f7188 is 95.677ms for 1196 entries.
Feb 12 19:43:09.589950 systemd-journald[954]: System Journal (/var/log/journal/efdce518bc0949d0bc924b2cb39f7188) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:43:09.705338 systemd-journald[954]: Received client request to flush runtime journal.
Feb 12 19:43:09.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.624182 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:43:09.675309 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:43:09.677667 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:43:09.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.707280 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:43:09.732398 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:43:09.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.733478 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:43:09.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.736146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:43:09.740390 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:43:09.771889 udevadm[997]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 19:43:09.825165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:43:09.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:10.746000 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:43:10.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:10.747000 audit: BPF prog-id=21 op=LOAD
Feb 12 19:43:10.747000 audit: BPF prog-id=22 op=LOAD
Feb 12 19:43:10.747000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:43:10.747000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:43:10.749528 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:43:10.786604 systemd-udevd[998]: Using default interface naming scheme 'v252'.
Feb 12 19:43:10.848027 systemd[1]: Started systemd-udevd.service.
Feb 12 19:43:10.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:10.853000 audit: BPF prog-id=23 op=LOAD
Feb 12 19:43:10.856479 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:43:10.884000 audit: BPF prog-id=24 op=LOAD
Feb 12 19:43:10.884000 audit: BPF prog-id=25 op=LOAD
Feb 12 19:43:10.884000 audit: BPF prog-id=26 op=LOAD
Feb 12 19:43:10.886236 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:43:10.944598 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 19:43:10.964353 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:43:10.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.011736 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:11.012072 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:43:11.014702 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:43:11.019325 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:43:11.070367 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:43:11.071023 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:43:11.071156 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:43:11.071341 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:11.072352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:43:11.072594 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:43:11.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.073646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:43:11.073859 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:43:11.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.115304 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:43:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.124959 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:43:11.125174 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:43:11.126038 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:43:11.203812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:43:11.204209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 19:43:11.209126 kernel: ACPI: button: Power Button [PWRF]
Feb 12 19:43:11.223987 systemd-networkd[1004]: lo: Link UP
Feb 12 19:43:11.224001 systemd-networkd[1004]: lo: Gained carrier
Feb 12 19:43:11.225110 systemd-networkd[1004]: Enumeration completed
Feb 12 19:43:11.225268 systemd[1]: Started systemd-networkd.service.
Feb 12 19:43:11.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.226910 systemd-networkd[1004]: eth1: Configuring with /run/systemd/network/10-be:2f:53:22:d1:66.network.
Feb 12 19:43:11.232259 systemd-networkd[1004]: eth0: Configuring with /run/systemd/network/10-ca:ec:a4:30:4e:82.network.
Feb 12 19:43:11.233507 systemd-networkd[1004]: eth1: Link UP
Feb 12 19:43:11.233519 systemd-networkd[1004]: eth1: Gained carrier
Feb 12 19:43:11.240537 systemd-networkd[1004]: eth0: Link UP
Feb 12 19:43:11.240552 systemd-networkd[1004]: eth0: Gained carrier
Feb 12 19:43:11.252000 audit[1008]: AVC avc: denied { confidentiality } for pid=1008 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 19:43:11.252000 audit[1008]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5593b1a37240 a1=32194 a2=7f88d11efbc5 a3=5 items=108 ppid=998 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:11.252000 audit: CWD cwd="/"
Feb 12 19:43:11.252000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=1 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=2 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=3 name=(null) inode=13667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=4 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=5 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=6 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=7 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=8 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=9 name=(null) inode=13670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=10 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=11 name=(null) inode=13671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=12 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=13 name=(null) inode=13672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=14 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=15 name=(null) inode=13673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=16 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=17 name=(null) inode=13674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=18 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=19 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=20 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=21 name=(null) inode=13676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=22 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=23 name=(null) inode=13677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=24 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=25 name=(null) inode=13678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=26 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=27 name=(null) inode=13679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=28 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=29 name=(null) inode=13680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=30 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=31 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=32 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=33 name=(null) inode=13682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=34 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=35 name=(null) inode=13683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=36 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=37 name=(null) inode=13684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=38 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=39 name=(null) inode=13685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=40 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=41 name=(null) inode=13686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=42 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=43 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=44 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=45 name=(null) inode=13688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=46 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=47 name=(null) inode=13689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=48 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=49 name=(null) inode=13690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=50 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=51 name=(null) inode=13691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=52 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=53 name=(null) inode=13692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=55 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=56 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=57 name=(null) inode=13694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=58 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=59 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=60 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=61 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=62 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=63 name=(null) inode=13697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=64 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=65 name=(null) inode=13698 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=66 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=67 name=(null) inode=13699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=68 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=69 name=(null) inode=13700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=70 name=(null) inode=13696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=71 name=(null) inode=13701 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=72 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=73 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=74 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=75 name=(null) inode=13703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=76 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=77 name=(null) inode=13704 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=78 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=79 name=(null) inode=13705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=80 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=81 name=(null) inode=13706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=82 name=(null) inode=13702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=83 name=(null) inode=13707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=84 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=85 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=86 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=87 name=(null) inode=13709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=88 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=89 name=(null) inode=13710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=90 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=91 name=(null) inode=13711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=92 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=93 name=(null) inode=13712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=94 name=(null) inode=13708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=95 name=(null) inode=13713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=96 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=97 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=98 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=99 name=(null) inode=13715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=100 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=101 name=(null) inode=13716 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=102 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=103 name=(null) inode=13717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=104 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=105 name=(null) inode=13718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=106 name=(null) inode=13714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PATH item=107 name=(null) inode=13719 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:11.252000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 19:43:11.280081 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 12 19:43:11.285092 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 12 19:43:11.337108 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 19:43:11.492087 kernel: EDAC MC: Ver: 3.0.0
Feb 12 19:43:11.516751 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:43:11.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.519713 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:43:11.551169 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:43:11.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.589044 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:43:11.589962 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:43:11.593347 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:43:11.603822 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:43:11.639931 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:43:11.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.640828 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:43:11.644740 systemd[1]: Mounting media-configdrive.mount...
Feb 12 19:43:11.645492 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:43:11.645573 systemd[1]: Reached target machines.target.
Feb 12 19:43:11.647833 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:43:11.665808 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:43:11.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:11.671078 kernel: ISO 9660 Extensions: RRIP_1991A
Feb 12 19:43:11.673660 systemd[1]: Mounted media-configdrive.mount.
Feb 12 19:43:11.674702 systemd[1]: Reached target local-fs.target.
Feb 12 19:43:11.677463 systemd[1]: Starting ldconfig.service...
Feb 12 19:43:11.679393 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:43:11.679484 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:43:11.681608 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:43:11.685495 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:43:11.687946 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:43:11.688301 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:43:11.691204 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:43:11.702831 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1043 (bootctl) Feb 12 19:43:11.705005 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:43:11.737151 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:43:11.749746 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:43:11.752748 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:43:11.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:11.985832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:43:11.988073 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:43:12.052706 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) Feb 12 19:43:12.052706 systemd-fsck[1049]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 19:43:12.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.056608 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:43:12.060351 systemd[1]: Mounting boot.mount... Feb 12 19:43:12.085582 systemd[1]: Mounted boot.mount. Feb 12 19:43:12.151637 systemd[1]: Finished systemd-boot-update.service. 
Feb 12 19:43:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.326672 systemd-networkd[1004]: eth1: Gained IPv6LL Feb 12 19:43:12.381354 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:43:12.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.384850 systemd[1]: Starting audit-rules.service... Feb 12 19:43:12.387475 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:43:12.391962 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:43:12.396000 audit: BPF prog-id=27 op=LOAD Feb 12 19:43:12.402744 systemd[1]: Starting systemd-resolved.service... Feb 12 19:43:12.405000 audit: BPF prog-id=28 op=LOAD Feb 12 19:43:12.409444 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:43:12.418633 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:43:12.443392 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:43:12.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.444464 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:43:12.453000 audit[1058]: SYSTEM_BOOT pid=1058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:12.456202 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:43:12.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.527247 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:43:12.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.622545 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:43:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:12.625118 systemd[1]: Reached target time-set.target. Feb 12 19:43:12.637389 augenrules[1073]: No rules Feb 12 19:43:12.637000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:43:12.637000 audit[1073]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe307b2760 a2=420 a3=0 items=0 ppid=1052 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:12.637000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:43:12.642122 systemd[1]: Finished audit-rules.service. Feb 12 19:43:12.677476 systemd-resolved[1056]: Positive Trust Anchors: Feb 12 19:43:12.677500 systemd-resolved[1056]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:43:12.677555 systemd-resolved[1056]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:43:12.698165 systemd-resolved[1056]: Using system hostname 'ci-3510.3.2-4-a1ae76f648'. Feb 12 19:43:13.099968 systemd-timesyncd[1057]: Contacted time server 45.33.103.94:123 (0.flatcar.pool.ntp.org). Feb 12 19:43:13.100462 systemd-timesyncd[1057]: Initial clock synchronization to Mon 2024-02-12 19:43:13.099442 UTC. Feb 12 19:43:13.104110 systemd[1]: Started systemd-resolved.service. Feb 12 19:43:13.106266 systemd[1]: Reached target network.target. Feb 12 19:43:13.107222 systemd[1]: Reached target nss-lookup.target. Feb 12 19:43:13.220025 ldconfig[1042]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:43:13.233154 systemd[1]: Finished ldconfig.service. Feb 12 19:43:13.236488 systemd[1]: Starting systemd-update-done.service... Feb 12 19:43:13.237392 systemd-networkd[1004]: eth0: Gained IPv6LL Feb 12 19:43:13.253078 systemd[1]: Finished systemd-update-done.service. Feb 12 19:43:13.256042 systemd[1]: Reached target sysinit.target. Feb 12 19:43:13.256600 systemd[1]: Started motdgen.path. Feb 12 19:43:13.257182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:43:13.258073 systemd[1]: Started logrotate.timer. Feb 12 19:43:13.258717 systemd[1]: Started mdadm.timer. Feb 12 19:43:13.259858 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 12 19:43:13.260503 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:43:13.260551 systemd[1]: Reached target paths.target. Feb 12 19:43:13.261058 systemd[1]: Reached target timers.target. Feb 12 19:43:13.264089 systemd[1]: Listening on dbus.socket. Feb 12 19:43:13.266784 systemd[1]: Starting docker.socket... Feb 12 19:43:13.277512 systemd[1]: Listening on sshd.socket. Feb 12 19:43:13.278354 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:43:13.288065 systemd[1]: Listening on docker.socket. Feb 12 19:43:13.289765 systemd[1]: Reached target sockets.target. Feb 12 19:43:13.290346 systemd[1]: Reached target basic.target. Feb 12 19:43:13.291638 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:43:13.291842 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:43:13.294576 systemd[1]: Starting containerd.service... Feb 12 19:43:13.297130 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 19:43:13.304715 systemd[1]: Starting dbus.service... Feb 12 19:43:13.309561 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:43:13.314676 systemd[1]: Starting extend-filesystems.service... Feb 12 19:43:13.316324 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:43:13.318704 systemd[1]: Starting motdgen.service... Feb 12 19:43:13.325311 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:43:13.331984 systemd[1]: Starting prepare-critools.service... Feb 12 19:43:13.339065 systemd[1]: Starting prepare-helm.service... 
Feb 12 19:43:13.342634 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:43:13.348194 systemd[1]: Starting sshd-keygen.service... Feb 12 19:43:13.357116 systemd[1]: Starting systemd-logind.service... Feb 12 19:43:13.357829 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:43:13.357917 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:43:13.358761 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:43:13.360529 systemd[1]: Starting update-engine.service... Feb 12 19:43:13.367130 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:43:13.384116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:43:13.385191 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:43:13.400413 tar[1101]: ./ Feb 12 19:43:13.400964 tar[1101]: ./macvlan Feb 12 19:43:13.421130 jq[1099]: true Feb 12 19:43:13.426703 extend-filesystems[1087]: Found vda Feb 12 19:43:13.427931 jq[1086]: false Feb 12 19:43:13.433905 tar[1110]: linux-amd64/helm Feb 12 19:43:13.440479 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:43:13.440877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 12 19:43:13.442633 extend-filesystems[1087]: Found vda1 Feb 12 19:43:13.446779 extend-filesystems[1087]: Found vda2 Feb 12 19:43:13.450788 extend-filesystems[1087]: Found vda3 Feb 12 19:43:13.450788 extend-filesystems[1087]: Found usr Feb 12 19:43:13.450788 extend-filesystems[1087]: Found vda4 Feb 12 19:43:13.450788 extend-filesystems[1087]: Found vda6 Feb 12 19:43:13.450788 extend-filesystems[1087]: Found vda7 Feb 12 19:43:13.450788 extend-filesystems[1087]: Found vda9 Feb 12 19:43:13.450788 extend-filesystems[1087]: Checking size of /dev/vda9 Feb 12 19:43:13.509927 tar[1103]: crictl Feb 12 19:43:13.510405 jq[1115]: true Feb 12 19:43:13.551383 dbus-daemon[1085]: [system] SELinux support is enabled Feb 12 19:43:13.556554 systemd[1]: Started dbus.service. Feb 12 19:43:13.560682 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:43:13.560940 systemd[1]: Finished motdgen.service. Feb 12 19:43:13.569012 systemd[1]: Created slice system-sshd.slice. Feb 12 19:43:13.569718 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:43:13.569782 systemd[1]: Reached target system-config.target. Feb 12 19:43:13.572004 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:43:13.575241 systemd[1]: Starting user-configdrive.service... Feb 12 19:43:13.585254 extend-filesystems[1087]: Resized partition /dev/vda9 Feb 12 19:43:13.619431 extend-filesystems[1136]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:43:13.649256 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 12 19:43:13.685801 update_engine[1098]: I0212 19:43:13.684397 1098 main.cc:92] Flatcar Update Engine starting Feb 12 19:43:13.696422 systemd[1]: Started update-engine.service. Feb 12 19:43:13.700153 systemd[1]: Started locksmithd.service. 
Feb 12 19:43:13.705324 update_engine[1098]: I0212 19:43:13.705266 1098 update_check_scheduler.cc:74] Next update check in 5m8s Feb 12 19:43:13.705649 coreos-cloudinit[1133]: 2024/02/12 19:43:13 Checking availability of "cloud-drive" Feb 12 19:43:13.706278 coreos-cloudinit[1133]: 2024/02/12 19:43:13 Fetching user-data from datasource of type "cloud-drive" Feb 12 19:43:13.706278 coreos-cloudinit[1133]: 2024/02/12 19:43:13 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 12 19:43:13.709765 coreos-cloudinit[1133]: 2024/02/12 19:43:13 Fetching meta-data from datasource of type "cloud-drive" Feb 12 19:43:13.709765 coreos-cloudinit[1133]: 2024/02/12 19:43:13 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 12 19:43:13.758666 coreos-cloudinit[1133]: Detected an Ignition config. Exiting... Feb 12 19:43:13.759346 systemd[1]: Finished user-configdrive.service. Feb 12 19:43:13.760025 systemd[1]: Reached target user-config.target. Feb 12 19:43:13.802567 env[1106]: time="2024-02-12T19:43:13.802441866Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:43:13.869711 tar[1101]: ./static Feb 12 19:43:13.883350 systemd-logind[1097]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:43:13.883908 bash[1148]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:43:13.883894 systemd-logind[1097]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:43:13.887152 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:43:13.895995 systemd-logind[1097]: New seat seat0. Feb 12 19:43:13.907824 systemd[1]: Started systemd-logind.service. Feb 12 19:43:13.948551 env[1106]: time="2024-02-12T19:43:13.947100521Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 19:43:13.950102 env[1106]: time="2024-02-12T19:43:13.950036967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:13.957448 env[1106]: time="2024-02-12T19:43:13.957375856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:13.961881 env[1106]: time="2024-02-12T19:43:13.961783507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:13.964780 env[1106]: time="2024-02-12T19:43:13.964709210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:13.965055 env[1106]: time="2024-02-12T19:43:13.965019348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:13.966825 env[1106]: time="2024-02-12T19:43:13.966760138Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:43:13.967315 env[1106]: time="2024-02-12T19:43:13.967264746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:13.967753 env[1106]: time="2024-02-12T19:43:13.967712075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:13.968465 env[1106]: time="2024-02-12T19:43:13.968431438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:43:13.971325 env[1106]: time="2024-02-12T19:43:13.971264633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:13.974874 env[1106]: time="2024-02-12T19:43:13.974809255Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:43:13.976011 env[1106]: time="2024-02-12T19:43:13.975948402Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:43:13.976504 env[1106]: time="2024-02-12T19:43:13.976457857Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:43:13.986557 coreos-metadata[1082]: Feb 12 19:43:13.986 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:43:14.005034 coreos-metadata[1082]: Feb 12 19:43:14.004 INFO Fetch successful Feb 12 19:43:14.063026 unknown[1082]: wrote ssh authorized keys file for user: core Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076075054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076148991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076169142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076251084Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076276819Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076351108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076370499Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076395355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076416169Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076436093Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076455092Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076500273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076711425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:43:14.078856 env[1106]: time="2024-02-12T19:43:14.076893982Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077305882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077341488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077363250Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077457995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077477819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077495726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077588396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077608107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077636670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077659429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077679908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077702150Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077925915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077954699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.079894 env[1106]: time="2024-02-12T19:43:14.077974396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:43:14.080573 env[1106]: time="2024-02-12T19:43:14.077991682Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:43:14.080573 env[1106]: time="2024-02-12T19:43:14.078013615Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:43:14.080573 env[1106]: time="2024-02-12T19:43:14.078031206Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:43:14.080573 env[1106]: time="2024-02-12T19:43:14.078058407Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:43:14.080573 env[1106]: time="2024-02-12T19:43:14.078105181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:43:14.081445 env[1106]: time="2024-02-12T19:43:14.078451631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:43:14.081445 env[1106]: time="2024-02-12T19:43:14.078536175Z" level=info msg="Connect containerd service" Feb 12 19:43:14.081445 env[1106]: time="2024-02-12T19:43:14.078591777Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:43:14.088009 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 12 19:43:14.088052 tar[1101]: ./vlan Feb 12 19:43:14.141442 extend-filesystems[1136]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:43:14.141442 extend-filesystems[1136]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 12 19:43:14.141442 extend-filesystems[1136]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.140426306Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.140581107Z" level=info msg="Start subscribing containerd event" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.140647121Z" level=info msg="Start recovering state" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.142591615Z" level=info msg="Start event monitor" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.142643929Z" level=info msg="Start snapshots syncer" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.142663576Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.142673718Z" level=info msg="Start streaming server" Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.144209020Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:43:14.167041 env[1106]: time="2024-02-12T19:43:14.144341751Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:43:14.167462 update-ssh-keys[1155]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:43:14.141675 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:43:14.167721 extend-filesystems[1087]: Resized filesystem in /dev/vda9 Feb 12 19:43:14.167721 extend-filesystems[1087]: Found vdb Feb 12 19:43:14.141998 systemd[1]: Finished extend-filesystems.service. Feb 12 19:43:14.146771 systemd[1]: Started containerd.service. Feb 12 19:43:14.160230 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 19:43:14.183599 env[1106]: time="2024-02-12T19:43:14.183533868Z" level=info msg="containerd successfully booted in 0.385927s" Feb 12 19:43:14.234060 tar[1101]: ./portmap Feb 12 19:43:14.288992 tar[1101]: ./host-local Feb 12 19:43:14.341292 tar[1101]: ./vrf Feb 12 19:43:14.395007 tar[1101]: ./bridge Feb 12 19:43:14.464652 tar[1101]: ./tuning Feb 12 19:43:14.561287 tar[1101]: ./firewall Feb 12 19:43:14.707581 tar[1101]: ./host-device Feb 12 19:43:14.826308 tar[1101]: ./sbr Feb 12 19:43:14.879811 tar[1101]: ./loopback Feb 12 19:43:14.933440 tar[1101]: ./dhcp Feb 12 19:43:15.202808 tar[1101]: ./ptp Feb 12 19:43:15.320191 tar[1101]: ./ipvlan Feb 12 19:43:15.386237 tar[1110]: linux-amd64/LICENSE Feb 12 19:43:15.386865 tar[1110]: linux-amd64/README.md Feb 12 19:43:15.395836 systemd[1]: Finished prepare-helm.service. Feb 12 19:43:15.410427 systemd[1]: Finished prepare-critools.service. Feb 12 19:43:15.426869 tar[1101]: ./bandwidth Feb 12 19:43:15.488912 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 12 19:43:15.514313 locksmithd[1149]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:43:15.653142 sshd_keygen[1118]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:43:15.685944 systemd[1]: Finished sshd-keygen.service. Feb 12 19:43:15.689277 systemd[1]: Starting issuegen.service... Feb 12 19:43:15.692242 systemd[1]: Started sshd@0-64.23.173.239:22-139.178.68.195:56330.service. Feb 12 19:43:15.704767 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:43:15.705052 systemd[1]: Finished issuegen.service. Feb 12 19:43:15.708495 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:43:15.721080 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:43:15.724142 systemd[1]: Started getty@tty1.service. Feb 12 19:43:15.727091 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:43:15.728124 systemd[1]: Reached target getty.target. Feb 12 19:43:15.728949 systemd[1]: Reached target multi-user.target. Feb 12 19:43:15.731771 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:43:15.749406 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:43:15.749688 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:43:15.750529 systemd[1]: Startup finished in 1.153s (kernel) + 8.807s (initrd) + 10.722s (userspace) = 20.682s. Feb 12 19:43:15.816950 sshd[1174]: Accepted publickey for core from 139.178.68.195 port 56330 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:43:15.820459 sshd[1174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:15.832525 systemd[1]: Created slice user-500.slice. Feb 12 19:43:15.834084 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:43:15.839322 systemd-logind[1097]: New session 1 of user core. Feb 12 19:43:15.846554 systemd[1]: Finished user-runtime-dir@500.service. 
Feb 12 19:43:15.849434 systemd[1]: Starting user@500.service... Feb 12 19:43:15.855532 (systemd)[1184]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:15.962030 systemd[1184]: Queued start job for default target default.target. Feb 12 19:43:15.963815 systemd[1184]: Reached target paths.target. Feb 12 19:43:15.964023 systemd[1184]: Reached target sockets.target. Feb 12 19:43:15.964118 systemd[1184]: Reached target timers.target. Feb 12 19:43:15.964218 systemd[1184]: Reached target basic.target. Feb 12 19:43:15.964426 systemd[1]: Started user@500.service. Feb 12 19:43:15.965727 systemd[1]: Started session-1.scope. Feb 12 19:43:15.967972 systemd[1184]: Reached target default.target. Feb 12 19:43:15.969067 systemd[1184]: Startup finished in 104ms. Feb 12 19:43:16.042947 systemd[1]: Started sshd@1-64.23.173.239:22-139.178.68.195:37468.service. Feb 12 19:43:16.092471 sshd[1193]: Accepted publickey for core from 139.178.68.195 port 37468 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:43:16.100968 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:16.110620 systemd-logind[1097]: New session 2 of user core. Feb 12 19:43:16.112002 systemd[1]: Started session-2.scope. Feb 12 19:43:16.188182 sshd[1193]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:16.196956 systemd[1]: sshd@1-64.23.173.239:22-139.178.68.195:37468.service: Deactivated successfully. Feb 12 19:43:16.198178 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:43:16.199757 systemd-logind[1097]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:43:16.201847 systemd[1]: Started sshd@2-64.23.173.239:22-139.178.68.195:37472.service. Feb 12 19:43:16.204308 systemd-logind[1097]: Removed session 2. 
Feb 12 19:43:16.253756 sshd[1199]: Accepted publickey for core from 139.178.68.195 port 37472 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:43:16.256196 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:16.263780 systemd-logind[1097]: New session 3 of user core.
Feb 12 19:43:16.264335 systemd[1]: Started session-3.scope.
Feb 12 19:43:16.339721 sshd[1199]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:16.346577 systemd[1]: Started sshd@3-64.23.173.239:22-139.178.68.195:37480.service.
Feb 12 19:43:16.347438 systemd[1]: sshd@2-64.23.173.239:22-139.178.68.195:37472.service: Deactivated successfully.
Feb 12 19:43:16.348588 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 19:43:16.351429 systemd-logind[1097]: Session 3 logged out. Waiting for processes to exit.
Feb 12 19:43:16.352765 systemd-logind[1097]: Removed session 3.
Feb 12 19:43:16.400197 sshd[1204]: Accepted publickey for core from 139.178.68.195 port 37480 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:43:16.402514 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:16.410659 systemd-logind[1097]: New session 4 of user core.
Feb 12 19:43:16.413447 systemd[1]: Started session-4.scope.
Feb 12 19:43:16.490546 sshd[1204]: pam_unix(sshd:session): session closed for user core
Feb 12 19:43:16.500507 systemd[1]: Started sshd@4-64.23.173.239:22-139.178.68.195:37484.service.
Feb 12 19:43:16.501634 systemd[1]: sshd@3-64.23.173.239:22-139.178.68.195:37480.service: Deactivated successfully.
Feb 12 19:43:16.503013 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:43:16.504636 systemd-logind[1097]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:43:16.508640 systemd-logind[1097]: Removed session 4.
Feb 12 19:43:16.561114 sshd[1210]: Accepted publickey for core from 139.178.68.195 port 37484 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:43:16.563495 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:43:16.572602 systemd-logind[1097]: New session 5 of user core.
Feb 12 19:43:16.573391 systemd[1]: Started session-5.scope.
Feb 12 19:43:16.659287 sudo[1214]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:43:16.660310 sudo[1214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:43:17.270337 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:43:17.282128 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:43:17.283781 systemd[1]: Reached target network-online.target.
Feb 12 19:43:17.286996 systemd[1]: Starting docker.service...
Feb 12 19:43:17.349322 env[1230]: time="2024-02-12T19:43:17.349246464Z" level=info msg="Starting up"
Feb 12 19:43:17.351970 env[1230]: time="2024-02-12T19:43:17.351927696Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:43:17.352177 env[1230]: time="2024-02-12T19:43:17.352154434Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:43:17.352283 env[1230]: time="2024-02-12T19:43:17.352261984Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:43:17.352356 env[1230]: time="2024-02-12T19:43:17.352341924Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:43:17.356335 env[1230]: time="2024-02-12T19:43:17.356292310Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:43:17.356571 env[1230]: time="2024-02-12T19:43:17.356549585Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:43:17.356720 env[1230]: time="2024-02-12T19:43:17.356690769Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:43:17.356868 env[1230]: time="2024-02-12T19:43:17.356849525Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:43:17.364141 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3201664209-merged.mount: Deactivated successfully.
Feb 12 19:43:17.572856 env[1230]: time="2024-02-12T19:43:17.572682151Z" level=info msg="Loading containers: start."
Feb 12 19:43:17.730795 kernel: Initializing XFRM netlink socket
Feb 12 19:43:17.777077 env[1230]: time="2024-02-12T19:43:17.777018440Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:43:17.883667 systemd-networkd[1004]: docker0: Link UP
Feb 12 19:43:17.905000 env[1230]: time="2024-02-12T19:43:17.904952496Z" level=info msg="Loading containers: done."
Feb 12 19:43:17.921490 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2846430498-merged.mount: Deactivated successfully.
Feb 12 19:43:17.932528 env[1230]: time="2024-02-12T19:43:17.932462665Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 19:43:17.933280 env[1230]: time="2024-02-12T19:43:17.933232630Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 19:43:17.933647 env[1230]: time="2024-02-12T19:43:17.933620663Z" level=info msg="Daemon has completed initialization"
Feb 12 19:43:17.976953 systemd[1]: Started docker.service.
Feb 12 19:43:17.986191 env[1230]: time="2024-02-12T19:43:17.986122963Z" level=info msg="API listen on /run/docker.sock"
Feb 12 19:43:18.019104 systemd[1]: Starting coreos-metadata.service...
Feb 12 19:43:18.067413 coreos-metadata[1354]: Feb 12 19:43:18.067 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 12 19:43:18.081202 coreos-metadata[1354]: Feb 12 19:43:18.081 INFO Fetch successful
Feb 12 19:43:18.097143 systemd[1]: Finished coreos-metadata.service.
Feb 12 19:43:18.119867 systemd[1]: Reloading.
Feb 12 19:43:18.266020 /usr/lib/systemd/system-generators/torcx-generator[1389]: time="2024-02-12T19:43:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:43:18.275056 /usr/lib/systemd/system-generators/torcx-generator[1389]: time="2024-02-12T19:43:18Z" level=info msg="torcx already run"
Feb 12 19:43:18.379988 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:43:18.380030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:43:18.413425 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:43:18.536147 systemd[1]: Started kubelet.service.
Feb 12 19:43:18.628848 kubelet[1432]: E0212 19:43:18.628740 1432 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:43:18.631713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:43:18.631891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:43:19.208433 env[1106]: time="2024-02-12T19:43:19.208384100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 12 19:43:19.845554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675595803.mount: Deactivated successfully.
Feb 12 19:43:22.236222 env[1106]: time="2024-02-12T19:43:22.236128012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:22.239153 env[1106]: time="2024-02-12T19:43:22.239085079Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:22.244103 env[1106]: time="2024-02-12T19:43:22.244024071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:22.251917 env[1106]: time="2024-02-12T19:43:22.251791528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:22.253656 env[1106]: time="2024-02-12T19:43:22.253507511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 12 19:43:22.267885 env[1106]: time="2024-02-12T19:43:22.267822386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 12 19:43:24.870363 env[1106]: time="2024-02-12T19:43:24.870303230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:24.872446 env[1106]: time="2024-02-12T19:43:24.872405341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:24.884913 env[1106]: time="2024-02-12T19:43:24.884852672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:24.887565 env[1106]: time="2024-02-12T19:43:24.887522578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:24.888327 env[1106]: time="2024-02-12T19:43:24.888291866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 12 19:43:24.904285 env[1106]: time="2024-02-12T19:43:24.904227413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 12 19:43:26.290583 env[1106]: time="2024-02-12T19:43:26.290507038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:26.293681 env[1106]: time="2024-02-12T19:43:26.293625116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:26.299760 env[1106]: time="2024-02-12T19:43:26.299668007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:26.303123 env[1106]: time="2024-02-12T19:43:26.303054813Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 12 19:43:26.303698 env[1106]: time="2024-02-12T19:43:26.301686533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:26.317261 env[1106]: time="2024-02-12T19:43:26.317208052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 12 19:43:27.637399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497643408.mount: Deactivated successfully.
Feb 12 19:43:28.289654 env[1106]: time="2024-02-12T19:43:28.289601002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.297387 env[1106]: time="2024-02-12T19:43:28.297313115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.301560 env[1106]: time="2024-02-12T19:43:28.301497031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.305465 env[1106]: time="2024-02-12T19:43:28.305403229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.306202 env[1106]: time="2024-02-12T19:43:28.306147021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 12 19:43:28.323330 env[1106]: time="2024-02-12T19:43:28.323264366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 19:43:28.729072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:43:28.729338 systemd[1]: Stopped kubelet.service.
Feb 12 19:43:28.731874 systemd[1]: Started kubelet.service.
Feb 12 19:43:28.866020 kubelet[1471]: E0212 19:43:28.865912 1471 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:43:28.871356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984342477.mount: Deactivated successfully.
Feb 12 19:43:28.877895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:43:28.878113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:43:28.907657 env[1106]: time="2024-02-12T19:43:28.907589098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.913131 env[1106]: time="2024-02-12T19:43:28.913067128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.917517 env[1106]: time="2024-02-12T19:43:28.917440141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.922408 env[1106]: time="2024-02-12T19:43:28.922341372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:28.923566 env[1106]: time="2024-02-12T19:43:28.923502735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 12 19:43:28.940233 env[1106]: time="2024-02-12T19:43:28.940182826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 12 19:43:30.107050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737587883.mount: Deactivated successfully.
Feb 12 19:43:35.943670 env[1106]: time="2024-02-12T19:43:35.943591259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:35.948148 env[1106]: time="2024-02-12T19:43:35.948065329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:35.951463 env[1106]: time="2024-02-12T19:43:35.951397090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:35.954180 env[1106]: time="2024-02-12T19:43:35.954117368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:35.955980 env[1106]: time="2024-02-12T19:43:35.955907268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 12 19:43:35.976875 env[1106]: time="2024-02-12T19:43:35.976778985Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 12 19:43:36.563882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437709764.mount: Deactivated successfully.
Feb 12 19:43:37.617195 env[1106]: time="2024-02-12T19:43:37.617138285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:37.621034 env[1106]: time="2024-02-12T19:43:37.620981240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:37.625544 env[1106]: time="2024-02-12T19:43:37.625379713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:37.629599 env[1106]: time="2024-02-12T19:43:37.629460266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:37.631959 env[1106]: time="2024-02-12T19:43:37.630189392Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 12 19:43:38.979095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 12 19:43:38.979338 systemd[1]: Stopped kubelet.service.
Feb 12 19:43:38.986644 systemd[1]: Started kubelet.service.
Feb 12 19:43:39.063925 kubelet[1545]: E0212 19:43:39.063864 1545 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:43:39.066917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:43:39.067110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:43:42.168093 systemd[1]: Stopped kubelet.service.
Feb 12 19:43:42.193656 systemd[1]: Reloading.
Feb 12 19:43:42.281977 /usr/lib/systemd/system-generators/torcx-generator[1574]: time="2024-02-12T19:43:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:43:42.282022 /usr/lib/systemd/system-generators/torcx-generator[1574]: time="2024-02-12T19:43:42Z" level=info msg="torcx already run"
Feb 12 19:43:42.408116 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:43:42.408424 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:43:42.442188 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:43:42.568307 systemd[1]: Started kubelet.service.
Feb 12 19:43:42.658498 kubelet[1622]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:43:42.659130 kubelet[1622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:43:42.659439 kubelet[1622]: I0212 19:43:42.659369 1622 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:43:42.662683 kubelet[1622]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:43:42.663060 kubelet[1622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:43:43.250684 kubelet[1622]: I0212 19:43:43.250637 1622 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:43:43.251191 kubelet[1622]: I0212 19:43:43.251164 1622 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:43:43.251667 kubelet[1622]: I0212 19:43:43.251640 1622 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:43:43.257205 kubelet[1622]: E0212 19:43:43.257161 1622 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.173.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.257360 kubelet[1622]: I0212 19:43:43.257223 1622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:43:43.259959 kubelet[1622]: I0212 19:43:43.259924 1622 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:43:43.260602 kubelet[1622]: I0212 19:43:43.260577 1622 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:43:43.261190 kubelet[1622]: I0212 19:43:43.261144 1622 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:43:43.261370 kubelet[1622]: I0212 19:43:43.261218 1622 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:43:43.261370 kubelet[1622]: I0212 19:43:43.261242 1622 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:43:43.261459 kubelet[1622]: I0212 19:43:43.261422 1622 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:43:43.270625 kubelet[1622]: I0212 19:43:43.270065 1622 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:43:43.270625 kubelet[1622]: I0212 19:43:43.270635 1622 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:43:43.271142 kubelet[1622]: I0212 19:43:43.270696 1622 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:43:43.271838 kubelet[1622]: I0212 19:43:43.271796 1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:43:43.273363 kubelet[1622]: W0212 19:43:43.273271 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.173.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-4-a1ae76f648&limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.273655 kubelet[1622]: E0212 19:43:43.273629 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.173.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-4-a1ae76f648&limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.274460 kubelet[1622]: I0212 19:43:43.274431 1622 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:43:43.275379 kubelet[1622]: W0212 19:43:43.275346 1622 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:43:43.276209 kubelet[1622]: I0212 19:43:43.276180 1622 server.go:1186] "Started kubelet"
Feb 12 19:43:43.276587 kubelet[1622]: W0212 19:43:43.276529 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.173.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.276785 kubelet[1622]: E0212 19:43:43.276766 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.173.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.279242 kubelet[1622]: E0212 19:43:43.278620 1622 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f3d8028bf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 276140735, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 276140735, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://64.23.173.239:6443/api/v1/namespaces/default/events": dial tcp 64.23.173.239:6443: connect: connection refused'(may retry after sleeping)
Feb 12 19:43:43.279532 kubelet[1622]: I0212 19:43:43.279413 1622 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:43:43.280507 kubelet[1622]: I0212 19:43:43.280473 1622 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:43:43.282799 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:43:43.283217 kubelet[1622]: I0212 19:43:43.283176 1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:43:43.283771 kubelet[1622]: E0212 19:43:43.283704 1622 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:43:43.283771 kubelet[1622]: E0212 19:43:43.283776 1622 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:43:43.288441 kubelet[1622]: I0212 19:43:43.287165 1622 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:43:43.288441 kubelet[1622]: I0212 19:43:43.287278 1622 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:43:43.288441 kubelet[1622]: W0212 19:43:43.288221 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.173.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.288441 kubelet[1622]: E0212 19:43:43.288311 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.173.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.290318 kubelet[1622]: E0212 19:43:43.290250 1622 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://64.23.173.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-4-a1ae76f648?timeout=10s": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.328287 kubelet[1622]: I0212 19:43:43.328233 1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:43:43.328287 kubelet[1622]: I0212 19:43:43.328265 1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:43:43.328287 kubelet[1622]: I0212 19:43:43.328285 1622 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:43:43.332259 kubelet[1622]: I0212 19:43:43.332220 1622 policy_none.go:49] "None policy: Start"
Feb 12 19:43:43.333502 kubelet[1622]: I0212 19:43:43.333466 1622 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:43:43.333502 kubelet[1622]: I0212 19:43:43.333500 1622 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:43:43.342354 systemd[1]: Created slice kubepods.slice.
Feb 12 19:43:43.350029 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 19:43:43.354635 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 19:43:43.362130 kubelet[1622]: I0212 19:43:43.362089 1622 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:43:43.362429 kubelet[1622]: I0212 19:43:43.362405 1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:43:43.364835 kubelet[1622]: E0212 19:43:43.363928 1622 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-4-a1ae76f648\" not found"
Feb 12 19:43:43.384696 kubelet[1622]: I0212 19:43:43.384613 1622 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:43:43.390443 kubelet[1622]: I0212 19:43:43.390397 1622 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.391620 kubelet[1622]: E0212 19:43:43.391587 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.173.239:6443/api/v1/nodes\": dial tcp 64.23.173.239:6443: connect: connection refused" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.424716 kubelet[1622]: I0212 19:43:43.424640 1622 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:43:43.424716 kubelet[1622]: I0212 19:43:43.424701 1622 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:43:43.424977 kubelet[1622]: I0212 19:43:43.424941 1622 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:43:43.425026 kubelet[1622]: E0212 19:43:43.425017 1622 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 19:43:43.425965 kubelet[1622]: W0212 19:43:43.425771 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.173.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.425965 kubelet[1622]: E0212 19:43:43.425852 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.173.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.491988 kubelet[1622]: E0212 19:43:43.491922 1622 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://64.23.173.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-4-a1ae76f648?timeout=10s": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:43.525919 kubelet[1622]: I0212 19:43:43.525185 1622 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:43:43.527080 kubelet[1622]: I0212 19:43:43.527039 1622 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:43:43.529564 kubelet[1622]: I0212 19:43:43.529482 1622 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:43:43.531576 kubelet[1622]: I0212 19:43:43.531544 1622 status_manager.go:698] "Failed to get status for pod" podUID=a095701792fa484e5be26d39425111a5
pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused" Feb 12 19:43:43.532535 kubelet[1622]: I0212 19:43:43.532246 1622 status_manager.go:698] "Failed to get status for pod" podUID=1fe6c48ad76431c100031d2b46ad5f62 pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused" Feb 12 19:43:43.537421 kubelet[1622]: I0212 19:43:43.537374 1622 status_manager.go:698] "Failed to get status for pod" podUID=6ccbd60b60bb6bf0e9ef1622e1f82716 pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused" Feb 12 19:43:43.540279 systemd[1]: Created slice kubepods-burstable-poda095701792fa484e5be26d39425111a5.slice. Feb 12 19:43:43.550570 systemd[1]: Created slice kubepods-burstable-pod1fe6c48ad76431c100031d2b46ad5f62.slice. Feb 12 19:43:43.556190 systemd[1]: Created slice kubepods-burstable-pod6ccbd60b60bb6bf0e9ef1622e1f82716.slice. 
Feb 12 19:43:43.590660 kubelet[1622]: I0212 19:43:43.590608 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.591337 kubelet[1622]: I0212 19:43:43.591313 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.591577 kubelet[1622]: I0212 19:43:43.591562 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.591708 kubelet[1622]: I0212 19:43:43.591698 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.595887 kubelet[1622]: I0212 19:43:43.595813 1622 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.596369 kubelet[1622]: I0212 19:43:43.596197 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.596624 kubelet[1622]: I0212 19:43:43.596604 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ccbd60b60bb6bf0e9ef1622e1f82716-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-4-a1ae76f648\" (UID: \"6ccbd60b60bb6bf0e9ef1622e1f82716\") " pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.596917 kubelet[1622]: I0212 19:43:43.596886 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.597118 kubelet[1622]: I0212 19:43:43.597103 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.597242 kubelet[1622]: I0212 19:43:43.597231 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.597400 kubelet[1622]: E0212 19:43:43.596442 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.173.239:6443/api/v1/nodes\": dial tcp 64.23.173.239:6443: connect: connection refused" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:43.849390 kubelet[1622]: E0212 19:43:43.849221 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:43.851546 env[1106]: time="2024-02-12T19:43:43.851473645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-4-a1ae76f648,Uid:a095701792fa484e5be26d39425111a5,Namespace:kube-system,Attempt:0,}"
Feb 12 19:43:43.853777 kubelet[1622]: E0212 19:43:43.853721 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:43.854942 env[1106]: time="2024-02-12T19:43:43.854524080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-4-a1ae76f648,Uid:1fe6c48ad76431c100031d2b46ad5f62,Namespace:kube-system,Attempt:0,}"
Feb 12 19:43:43.863506 kubelet[1622]: E0212 19:43:43.863474 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:43.864890 env[1106]: time="2024-02-12T19:43:43.864302497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-4-a1ae76f648,Uid:6ccbd60b60bb6bf0e9ef1622e1f82716,Namespace:kube-system,Attempt:0,}"
Feb 12 19:43:43.893857 kubelet[1622]: E0212 19:43:43.893788 1622 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://64.23.173.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-4-a1ae76f648?timeout=10s": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.008973 kubelet[1622]: I0212 19:43:44.008935 1622 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:44.009793 kubelet[1622]: E0212 19:43:44.009737 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.173.239:6443/api/v1/nodes\": dial tcp 64.23.173.239:6443: connect: connection refused" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:44.220858 kubelet[1622]: W0212 19:43:44.220663 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://64.23.173.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.221116 kubelet[1622]: E0212 19:43:44.221092 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.173.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.441781 kubelet[1622]: W0212 19:43:44.441622 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://64.23.173.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.441781 kubelet[1622]: E0212 19:43:44.441729 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.173.239:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.514269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547264208.mount: Deactivated successfully.
Feb 12 19:43:44.525462 env[1106]: time="2024-02-12T19:43:44.525394115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.534449 env[1106]: time="2024-02-12T19:43:44.534388966Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.536627 env[1106]: time="2024-02-12T19:43:44.536574721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.541064 env[1106]: time="2024-02-12T19:43:44.541007058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.547892 env[1106]: time="2024-02-12T19:43:44.546542557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.548774 env[1106]: time="2024-02-12T19:43:44.548672829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.549568 env[1106]: time="2024-02-12T19:43:44.549514011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.550636 env[1106]: time="2024-02-12T19:43:44.550592007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.555452 env[1106]: time="2024-02-12T19:43:44.555378626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.557620 env[1106]: time="2024-02-12T19:43:44.557551832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.559844 env[1106]: time="2024-02-12T19:43:44.559788316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.561242 env[1106]: time="2024-02-12T19:43:44.561184509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:43:44.568142 kubelet[1622]: W0212 19:43:44.567610 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://64.23.173.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-4-a1ae76f648&limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.568142 kubelet[1622]: E0212 19:43:44.567698 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.173.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-4-a1ae76f648&limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.621622 env[1106]: time="2024-02-12T19:43:44.615918050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:43:44.621878 env[1106]: time="2024-02-12T19:43:44.621605282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:43:44.621878 env[1106]: time="2024-02-12T19:43:44.621617889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:43:44.622057 env[1106]: time="2024-02-12T19:43:44.621922548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39b96dda6170c33fd8fceccc8480989c7855886e985930a41d26d3fc572c4c84 pid=1698 runtime=io.containerd.runc.v2
Feb 12 19:43:44.639789 env[1106]: time="2024-02-12T19:43:44.639677011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:43:44.639965 env[1106]: time="2024-02-12T19:43:44.639806610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:43:44.639965 env[1106]: time="2024-02-12T19:43:44.639840894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:43:44.640154 env[1106]: time="2024-02-12T19:43:44.640105811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c62b9a8d919008663dc65abde5dbda9f595f10d1b4297747ce424a57f3aa22a pid=1732 runtime=io.containerd.runc.v2
Feb 12 19:43:44.643663 env[1106]: time="2024-02-12T19:43:44.643379831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:43:44.643663 env[1106]: time="2024-02-12T19:43:44.643433250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:43:44.643663 env[1106]: time="2024-02-12T19:43:44.643451022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:43:44.643941 env[1106]: time="2024-02-12T19:43:44.643706668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a052371ebe0b5fcd82b5d7e650e62e265ea83245572f323ddaac442054f30535 pid=1721 runtime=io.containerd.runc.v2
Feb 12 19:43:44.657942 systemd[1]: Started cri-containerd-39b96dda6170c33fd8fceccc8480989c7855886e985930a41d26d3fc572c4c84.scope.
Feb 12 19:43:44.690315 systemd[1]: Started cri-containerd-a052371ebe0b5fcd82b5d7e650e62e265ea83245572f323ddaac442054f30535.scope.
Feb 12 19:43:44.695581 kubelet[1622]: E0212 19:43:44.695523 1622 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://64.23.173.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-4-a1ae76f648?timeout=10s": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.718964 systemd[1]: Started cri-containerd-3c62b9a8d919008663dc65abde5dbda9f595f10d1b4297747ce424a57f3aa22a.scope.
Feb 12 19:43:44.729825 kubelet[1622]: W0212 19:43:44.729727 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://64.23.173.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.729825 kubelet[1622]: E0212 19:43:44.729830 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.173.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:44.789643 env[1106]: time="2024-02-12T19:43:44.789486362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-4-a1ae76f648,Uid:1fe6c48ad76431c100031d2b46ad5f62,Namespace:kube-system,Attempt:0,} returns sandbox id \"a052371ebe0b5fcd82b5d7e650e62e265ea83245572f323ddaac442054f30535\""
Feb 12 19:43:44.793164 kubelet[1622]: E0212 19:43:44.793124 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:44.797211 env[1106]: time="2024-02-12T19:43:44.797156968Z" level=info msg="CreateContainer within sandbox \"a052371ebe0b5fcd82b5d7e650e62e265ea83245572f323ddaac442054f30535\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:43:44.816069 kubelet[1622]: I0212 19:43:44.815467 1622 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:44.816069 kubelet[1622]: E0212 19:43:44.816028 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://64.23.173.239:6443/api/v1/nodes\": dial tcp 64.23.173.239:6443: connect: connection refused" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:44.828557 env[1106]: time="2024-02-12T19:43:44.828496436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-4-a1ae76f648,Uid:a095701792fa484e5be26d39425111a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"39b96dda6170c33fd8fceccc8480989c7855886e985930a41d26d3fc572c4c84\""
Feb 12 19:43:44.830239 kubelet[1622]: E0212 19:43:44.829953 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:44.834535 env[1106]: time="2024-02-12T19:43:44.834473032Z" level=info msg="CreateContainer within sandbox \"39b96dda6170c33fd8fceccc8480989c7855886e985930a41d26d3fc572c4c84\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:43:44.867414 env[1106]: time="2024-02-12T19:43:44.867112728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-4-a1ae76f648,Uid:6ccbd60b60bb6bf0e9ef1622e1f82716,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c62b9a8d919008663dc65abde5dbda9f595f10d1b4297747ce424a57f3aa22a\""
Feb 12 19:43:44.868662 kubelet[1622]: E0212 19:43:44.868552 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:44.872225 env[1106]: time="2024-02-12T19:43:44.872169293Z" level=info msg="CreateContainer within sandbox \"3c62b9a8d919008663dc65abde5dbda9f595f10d1b4297747ce424a57f3aa22a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:43:44.896728 env[1106]: time="2024-02-12T19:43:44.896667004Z" level=info msg="CreateContainer within sandbox \"a052371ebe0b5fcd82b5d7e650e62e265ea83245572f323ddaac442054f30535\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"953319b52b79a1f597971060aeb8ea384d9fb08f50f3fb5f631243d122376693\""
Feb 12 19:43:44.897949 env[1106]: time="2024-02-12T19:43:44.897833663Z" level=info msg="StartContainer for \"953319b52b79a1f597971060aeb8ea384d9fb08f50f3fb5f631243d122376693\""
Feb 12 19:43:44.904901 env[1106]: time="2024-02-12T19:43:44.904830710Z" level=info msg="CreateContainer within sandbox \"39b96dda6170c33fd8fceccc8480989c7855886e985930a41d26d3fc572c4c84\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77898ef52ced6ba74e27e14768267ea37df476fc36de139814eebd65aca80e18\""
Feb 12 19:43:44.905776 env[1106]: time="2024-02-12T19:43:44.905725654Z" level=info msg="StartContainer for \"77898ef52ced6ba74e27e14768267ea37df476fc36de139814eebd65aca80e18\""
Feb 12 19:43:44.931384 systemd[1]: Started cri-containerd-953319b52b79a1f597971060aeb8ea384d9fb08f50f3fb5f631243d122376693.scope.
Feb 12 19:43:44.944142 env[1106]: time="2024-02-12T19:43:44.944072144Z" level=info msg="CreateContainer within sandbox \"3c62b9a8d919008663dc65abde5dbda9f595f10d1b4297747ce424a57f3aa22a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3652d37d512d33d3603f07ec40a91c9f2656035fc6225b3e2e5c411e0109b67c\""
Feb 12 19:43:44.944957 env[1106]: time="2024-02-12T19:43:44.944909899Z" level=info msg="StartContainer for \"3652d37d512d33d3603f07ec40a91c9f2656035fc6225b3e2e5c411e0109b67c\""
Feb 12 19:43:44.958021 systemd[1]: Started cri-containerd-77898ef52ced6ba74e27e14768267ea37df476fc36de139814eebd65aca80e18.scope.
Feb 12 19:43:44.993220 systemd[1]: Started cri-containerd-3652d37d512d33d3603f07ec40a91c9f2656035fc6225b3e2e5c411e0109b67c.scope.
Feb 12 19:43:45.065200 env[1106]: time="2024-02-12T19:43:45.065060823Z" level=info msg="StartContainer for \"953319b52b79a1f597971060aeb8ea384d9fb08f50f3fb5f631243d122376693\" returns successfully"
Feb 12 19:43:45.102032 env[1106]: time="2024-02-12T19:43:45.101978984Z" level=info msg="StartContainer for \"77898ef52ced6ba74e27e14768267ea37df476fc36de139814eebd65aca80e18\" returns successfully"
Feb 12 19:43:45.124079 env[1106]: time="2024-02-12T19:43:45.124027609Z" level=info msg="StartContainer for \"3652d37d512d33d3603f07ec40a91c9f2656035fc6225b3e2e5c411e0109b67c\" returns successfully"
Feb 12 19:43:45.433622 kubelet[1622]: E0212 19:43:45.433477 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:45.434041 kubelet[1622]: I0212 19:43:45.434014 1622 status_manager.go:698] "Failed to get status for pod" podUID=6ccbd60b60bb6bf0e9ef1622e1f82716 pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused"
Feb 12 19:43:45.436648 kubelet[1622]: E0212 19:43:45.436620 1622 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.173.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.173.239:6443: connect: connection refused
Feb 12 19:43:45.438333 kubelet[1622]: E0212 19:43:45.438008 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:45.438610 kubelet[1622]: I0212 19:43:45.438584 1622 status_manager.go:698] "Failed to get status for pod" podUID=1fe6c48ad76431c100031d2b46ad5f62 pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused"
Feb 12 19:43:45.441155 kubelet[1622]: E0212 19:43:45.441114 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:45.470987 kubelet[1622]: I0212 19:43:45.470938 1622 status_manager.go:698] "Failed to get status for pod" podUID=a095701792fa484e5be26d39425111a5 pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" err="Get \"https://64.23.173.239:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-4-a1ae76f648\": dial tcp 64.23.173.239:6443: connect: connection refused"
Feb 12 19:43:46.417359 kubelet[1622]: I0212 19:43:46.417316 1622 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:46.443262 kubelet[1622]: E0212 19:43:46.443222 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:46.443992 kubelet[1622]: E0212 19:43:46.443963 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:46.444566 kubelet[1622]: E0212 19:43:46.444537 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:47.444583 kubelet[1622]: E0212 19:43:47.444551 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:47.606239 kubelet[1622]: E0212 19:43:47.606183 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:48.764971 kubelet[1622]: E0212 19:43:48.764930 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:43:49.565321 kubelet[1622]: E0212 19:43:49.565255 1622 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-4-a1ae76f648\" not found" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:49.593097 kubelet[1622]: I0212 19:43:49.593049 1622 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-4-a1ae76f648"
Feb 12 19:43:49.649447 kubelet[1622]: E0212 19:43:49.649299 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f3d8028bf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 276140735, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 276140735, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:43:49.731959 kubelet[1622]: E0212 19:43:49.731836 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f3df46445", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 283758149, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 283758149, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:43:49.797086 kubelet[1622]: E0212 19:43:49.796938 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f61a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327469988, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327469988, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:49.854859 kubelet[1622]: E0212 19:43:49.854659 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f7a1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327476250, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327476250, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:49.919211 kubelet[1622]: E0212 19:43:49.918913 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f8955", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327480149, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327480149, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:49.976324 kubelet[1622]: E0212 19:43:49.976148 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f42e1629d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 366398621, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 366398621, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:50.045017 kubelet[1622]: E0212 19:43:50.044892 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f61a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327469988, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 390335179, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:50.105883 kubelet[1622]: E0212 19:43:50.105629 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f7a1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327476250, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 390344621, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:50.166771 kubelet[1622]: E0212 19:43:50.166553 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f8955", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327480149, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 390351116, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:50.278476 kubelet[1622]: I0212 19:43:50.278395 1622 apiserver.go:52] "Watching apiserver" Feb 12 19:43:50.287754 kubelet[1622]: I0212 19:43:50.287674 1622 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:50.308891 kubelet[1622]: E0212 19:43:50.308681 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f61a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327469988, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 526744486, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:50.359352 kubelet[1622]: I0212 19:43:50.359205 1622 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:43:50.715497 kubelet[1622]: E0212 19:43:50.715220 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f7a1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327476250, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 526751806, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:51.115638 kubelet[1622]: E0212 19:43:51.115501 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f8955", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327480149, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 526755335, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:43:51.506008 kubelet[1622]: E0212 19:43:51.505780 1622 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-4-a1ae76f648.17b3350f408f61a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-4-a1ae76f648", UID:"ci-3510.3.2-4-a1ae76f648", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-4-a1ae76f648 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-4-a1ae76f648"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 327469988, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 528955894, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:43:53.218571 kubelet[1622]: E0212 19:43:53.218523 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:53.360045 systemd[1]: Reloading. 
Feb 12 19:43:53.466033 kubelet[1622]: E0212 19:43:53.465313 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:53.613727 /usr/lib/systemd/system-generators/torcx-generator[1944]: time="2024-02-12T19:43:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:53.646557 /usr/lib/systemd/system-generators/torcx-generator[1944]: time="2024-02-12T19:43:53Z" level=info msg="torcx already run" Feb 12 19:43:53.681223 kubelet[1622]: I0212 19:43:53.681172 1622 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" podStartSLOduration=0.68109181 pod.CreationTimestamp="2024-02-12 19:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:53.676940362 +0000 UTC m=+11.102423434" watchObservedRunningTime="2024-02-12 19:43:53.68109181 +0000 UTC m=+11.106574868" Feb 12 19:43:53.830652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:53.830678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:53.876574 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:54.077847 systemd[1]: Stopping kubelet.service... 
Feb 12 19:43:54.091758 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:43:54.092146 systemd[1]: Stopped kubelet.service. Feb 12 19:43:54.092307 systemd[1]: kubelet.service: Consumed 1.302s CPU time. Feb 12 19:43:54.097151 systemd[1]: Started kubelet.service. Feb 12 19:43:54.242355 kubelet[1990]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:54.242355 kubelet[1990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:54.242982 kubelet[1990]: I0212 19:43:54.242500 1990 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:54.245468 kubelet[1990]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:54.245468 kubelet[1990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:54.270777 kubelet[1990]: I0212 19:43:54.267717 1990 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:43:54.270777 kubelet[1990]: I0212 19:43:54.267778 1990 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:54.270777 kubelet[1990]: I0212 19:43:54.268210 1990 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:43:54.270777 kubelet[1990]: I0212 19:43:54.270548 1990 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 12 19:43:54.277718 kubelet[1990]: I0212 19:43:54.277657 1990 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:54.279881 kubelet[1990]: I0212 19:43:54.279835 1990 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:43:54.280279 kubelet[1990]: I0212 19:43:54.280236 1990 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:54.280437 kubelet[1990]: I0212 19:43:54.280381 1990 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:43:54.280437 kubelet[1990]: I0212 19:43:54.280419 1990 topology_manager.go:134] "Creating topology 
manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:43:54.280437 kubelet[1990]: I0212 19:43:54.280439 1990 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:43:54.280691 kubelet[1990]: I0212 19:43:54.280502 1990 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:54.286695 kubelet[1990]: I0212 19:43:54.286628 1990 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:43:54.286695 kubelet[1990]: I0212 19:43:54.286674 1990 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:54.286983 kubelet[1990]: I0212 19:43:54.286745 1990 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:43:54.286983 kubelet[1990]: I0212 19:43:54.286769 1990 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:54.290412 kubelet[1990]: I0212 19:43:54.290363 1990 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:54.291385 kubelet[1990]: I0212 19:43:54.291157 1990 server.go:1186] "Started kubelet" Feb 12 19:43:54.300963 kubelet[1990]: I0212 19:43:54.300914 1990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:54.314042 kubelet[1990]: I0212 19:43:54.313981 1990 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:54.315060 kubelet[1990]: I0212 19:43:54.315012 1990 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:43:54.319970 kubelet[1990]: E0212 19:43:54.319890 1990 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:54.320214 kubelet[1990]: E0212 19:43:54.320199 1990 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:54.321196 kubelet[1990]: I0212 19:43:54.321158 1990 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:43:54.321943 kubelet[1990]: I0212 19:43:54.321898 1990 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:54.336135 sudo[2004]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:43:54.336417 sudo[2004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:43:54.446524 kubelet[1990]: I0212 19:43:54.446466 1990 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.490592 kubelet[1990]: I0212 19:43:54.490547 1990 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.491212 kubelet[1990]: I0212 19:43:54.491186 1990 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.585435 kubelet[1990]: I0212 19:43:54.584907 1990 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:43:54.608071 kubelet[1990]: I0212 19:43:54.608030 1990 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:54.608408 kubelet[1990]: I0212 19:43:54.608387 1990 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:54.608723 kubelet[1990]: I0212 19:43:54.608615 1990 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:54.609626 kubelet[1990]: I0212 19:43:54.609598 1990 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:43:54.609928 kubelet[1990]: I0212 19:43:54.609908 1990 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:43:54.610068 kubelet[1990]: I0212 19:43:54.610051 1990 policy_none.go:49] "None policy: Start" Feb 12 19:43:54.612085 kubelet[1990]: I0212 19:43:54.612041 1990 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:54.612332 kubelet[1990]: I0212 19:43:54.612313 1990 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:54.613356 kubelet[1990]: I0212 19:43:54.613322 1990 state_mem.go:75] "Updated machine memory state" Feb 12 19:43:54.628362 kubelet[1990]: I0212 19:43:54.628318 1990 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:54.630635 kubelet[1990]: I0212 19:43:54.629779 1990 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:54.758587 kubelet[1990]: I0212 19:43:54.758524 1990 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:43:54.764502 kubelet[1990]: I0212 19:43:54.763685 1990 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:43:54.764502 kubelet[1990]: I0212 19:43:54.764177 1990 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:43:54.764502 kubelet[1990]: E0212 19:43:54.764297 1990 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:43:54.864824 kubelet[1990]: I0212 19:43:54.864649 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.864824 kubelet[1990]: I0212 19:43:54.864821 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.865077 kubelet[1990]: I0212 19:43:54.864878 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:54.913261 kubelet[1990]: E0212 19:43:54.913212 1990 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928186 kubelet[1990]: I0212 19:43:54.928132 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928397 kubelet[1990]: I0212 19:43:54.928220 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928397 kubelet[1990]: I0212 19:43:54.928270 1990 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928397 kubelet[1990]: I0212 19:43:54.928306 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928589 kubelet[1990]: I0212 19:43:54.928485 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ccbd60b60bb6bf0e9ef1622e1f82716-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-4-a1ae76f648\" (UID: \"6ccbd60b60bb6bf0e9ef1622e1f82716\") " pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928635 kubelet[1990]: I0212 19:43:54.928580 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928787 kubelet[1990]: I0212 19:43:54.928726 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a095701792fa484e5be26d39425111a5-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" (UID: \"a095701792fa484e5be26d39425111a5\") " 
pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928888 kubelet[1990]: I0212 19:43:54.928819 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:54.928942 kubelet[1990]: I0212 19:43:54.928891 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe6c48ad76431c100031d2b46ad5f62-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" (UID: \"1fe6c48ad76431c100031d2b46ad5f62\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:55.185711 kubelet[1990]: E0212 19:43:55.185510 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:55.189239 kubelet[1990]: E0212 19:43:55.189195 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:55.215638 kubelet[1990]: E0212 19:43:55.215589 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:55.303855 kubelet[1990]: I0212 19:43:55.303784 1990 apiserver.go:52] "Watching apiserver" Feb 12 19:43:55.322558 kubelet[1990]: I0212 19:43:55.322476 1990 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:55.333699 kubelet[1990]: I0212 19:43:55.333632 1990 
reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:43:55.379553 sudo[2004]: pam_unix(sudo:session): session closed for user root Feb 12 19:43:55.933135 kubelet[1990]: E0212 19:43:55.932984 1990 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-4-a1ae76f648\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:55.933825 kubelet[1990]: E0212 19:43:55.933724 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.302611 kubelet[1990]: E0212 19:43:56.302514 1990 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-4-a1ae76f648\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:56.304048 kubelet[1990]: E0212 19:43:56.303905 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.501529 kubelet[1990]: E0212 19:43:56.501482 1990 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-4-a1ae76f648\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" Feb 12 19:43:56.502411 kubelet[1990]: E0212 19:43:56.502366 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.806611 kubelet[1990]: E0212 19:43:56.806562 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.816233 kubelet[1990]: E0212 19:43:56.816194 1990 dns.go:156] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.818117 kubelet[1990]: E0212 19:43:56.818083 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:56.928795 kubelet[1990]: I0212 19:43:56.928748 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-4-a1ae76f648" podStartSLOduration=2.9280934739999998 pod.CreationTimestamp="2024-02-12 19:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:55.701512524 +0000 UTC m=+1.580185300" watchObservedRunningTime="2024-02-12 19:43:56.928093474 +0000 UTC m=+2.806766251" Feb 12 19:43:57.303115 kubelet[1990]: I0212 19:43:57.303061 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-4-a1ae76f648" podStartSLOduration=3.302994302 pod.CreationTimestamp="2024-02-12 19:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:57.300862675 +0000 UTC m=+3.179535451" watchObservedRunningTime="2024-02-12 19:43:57.302994302 +0000 UTC m=+3.181667079" Feb 12 19:43:57.809581 kubelet[1990]: E0212 19:43:57.809536 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:43:59.071978 update_engine[1098]: I0212 19:43:59.071885 1098 update_attempter.cc:509] Updating boot flags... 
Feb 12 19:43:59.083122 sudo[1214]: pam_unix(sudo:session): session closed for user root Feb 12 19:43:59.100902 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:59.109805 systemd[1]: sshd@4-64.23.173.239:22-139.178.68.195:37484.service: Deactivated successfully. Feb 12 19:43:59.111437 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:43:59.111706 systemd[1]: session-5.scope: Consumed 8.408s CPU time. Feb 12 19:43:59.112656 systemd-logind[1097]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:43:59.114423 systemd-logind[1097]: Removed session 5. Feb 12 19:44:02.441966 kubelet[1990]: E0212 19:44:02.441912 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:02.741699 kubelet[1990]: E0212 19:44:02.741656 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:02.826153 kubelet[1990]: E0212 19:44:02.826105 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:02.826951 kubelet[1990]: E0212 19:44:02.826917 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:03.829408 kubelet[1990]: E0212 19:44:03.829367 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:05.495272 kubelet[1990]: E0212 19:44:05.495232 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:06.889070 kubelet[1990]: I0212 19:44:06.889023 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:06.896289 systemd[1]: Created slice kubepods-besteffort-pod2915b63c_0548_43ad_9bb0_bea14f95fef9.slice. Feb 12 19:44:06.910066 kubelet[1990]: W0212 19:44:06.910022 1990 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:44:06.910334 kubelet[1990]: E0212 19:44:06.910088 1990 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:44:06.910636 kubelet[1990]: W0212 19:44:06.910610 1990 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:44:06.910636 kubelet[1990]: E0212 19:44:06.910641 1990 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 
19:44:06.917052 kubelet[1990]: I0212 19:44:06.917013 1990 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:44:06.918141 env[1106]: time="2024-02-12T19:44:06.918083445Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:44:06.918639 kubelet[1990]: I0212 19:44:06.918580 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:06.918936 kubelet[1990]: I0212 19:44:06.918909 1990 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:44:06.926805 systemd[1]: Created slice kubepods-burstable-podfdb0c430_20af_475d_8ff7_b47df0e68ff4.slice. Feb 12 19:44:06.943996 kubelet[1990]: I0212 19:44:06.943916 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-bpf-maps\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.943996 kubelet[1990]: I0212 19:44:06.943974 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-cgroup\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944210 kubelet[1990]: I0212 19:44:06.944013 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-net\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944210 kubelet[1990]: I0212 19:44:06.944032 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hubble-tls\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944210 kubelet[1990]: I0212 19:44:06.944075 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5df\" (UniqueName: \"kubernetes.io/projected/2915b63c-0548-43ad-9bb0-bea14f95fef9-kube-api-access-6m5df\") pod \"kube-proxy-tqv7n\" (UID: \"2915b63c-0548-43ad-9bb0-bea14f95fef9\") " pod="kube-system/kube-proxy-tqv7n" Feb 12 19:44:06.944210 kubelet[1990]: I0212 19:44:06.944102 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-config-path\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944210 kubelet[1990]: I0212 19:44:06.944148 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv97b\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-kube-api-access-mv97b\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944194 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hostproc\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944244 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2915b63c-0548-43ad-9bb0-bea14f95fef9-lib-modules\") pod \"kube-proxy-tqv7n\" (UID: \"2915b63c-0548-43ad-9bb0-bea14f95fef9\") " pod="kube-system/kube-proxy-tqv7n" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944271 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-etc-cni-netd\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944307 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2915b63c-0548-43ad-9bb0-bea14f95fef9-xtables-lock\") pod \"kube-proxy-tqv7n\" (UID: \"2915b63c-0548-43ad-9bb0-bea14f95fef9\") " pod="kube-system/kube-proxy-tqv7n" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944325 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-xtables-lock\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944356 kubelet[1990]: I0212 19:44:06.944343 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-kernel\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944573 kubelet[1990]: I0212 19:44:06.944375 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2915b63c-0548-43ad-9bb0-bea14f95fef9-kube-proxy\") pod \"kube-proxy-tqv7n\" (UID: 
\"2915b63c-0548-43ad-9bb0-bea14f95fef9\") " pod="kube-system/kube-proxy-tqv7n" Feb 12 19:44:06.944573 kubelet[1990]: I0212 19:44:06.944399 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-run\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944573 kubelet[1990]: I0212 19:44:06.944418 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cni-path\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944573 kubelet[1990]: I0212 19:44:06.944435 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-lib-modules\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:06.944573 kubelet[1990]: I0212 19:44:06.944471 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdb0c430-20af-475d-8ff7-b47df0e68ff4-clustermesh-secrets\") pod \"cilium-md7h8\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") " pod="kube-system/cilium-md7h8" Feb 12 19:44:07.079678 kubelet[1990]: I0212 19:44:07.079617 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:07.086315 systemd[1]: Created slice kubepods-besteffort-pod8e19b2fb_6e86_46da_8e73_ec2c727ab706.slice. 
Feb 12 19:44:07.146753 kubelet[1990]: I0212 19:44:07.146576 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e19b2fb-6e86-46da-8e73-ec2c727ab706-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-mnzdv\" (UID: \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\") " pod="kube-system/cilium-operator-f59cbd8c6-mnzdv" Feb 12 19:44:07.146753 kubelet[1990]: I0212 19:44:07.146663 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w8l8\" (UniqueName: \"kubernetes.io/projected/8e19b2fb-6e86-46da-8e73-ec2c727ab706-kube-api-access-7w8l8\") pod \"cilium-operator-f59cbd8c6-mnzdv\" (UID: \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\") " pod="kube-system/cilium-operator-f59cbd8c6-mnzdv" Feb 12 19:44:07.990250 kubelet[1990]: E0212 19:44:07.990172 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:07.993763 env[1106]: time="2024-02-12T19:44:07.993698197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mnzdv,Uid:8e19b2fb-6e86-46da-8e73-ec2c727ab706,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:08.040408 env[1106]: time="2024-02-12T19:44:08.040200145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:08.040408 env[1106]: time="2024-02-12T19:44:08.040350213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:08.040858 env[1106]: time="2024-02-12T19:44:08.040804545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:08.041315 env[1106]: time="2024-02-12T19:44:08.041260756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b pid=2112 runtime=io.containerd.runc.v2 Feb 12 19:44:08.064034 systemd[1]: Started cri-containerd-b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b.scope. Feb 12 19:44:08.073294 kubelet[1990]: E0212 19:44:08.064855 1990 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:08.073294 kubelet[1990]: E0212 19:44:08.064975 1990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2915b63c-0548-43ad-9bb0-bea14f95fef9-kube-proxy podName:2915b63c-0548-43ad-9bb0-bea14f95fef9 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:08.564940339 +0000 UTC m=+14.443613110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2915b63c-0548-43ad-9bb0-bea14f95fef9-kube-proxy") pod "kube-proxy-tqv7n" (UID: "2915b63c-0548-43ad-9bb0-bea14f95fef9") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:08.131368 kubelet[1990]: E0212 19:44:08.131319 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:08.134871 env[1106]: time="2024-02-12T19:44:08.134811704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-md7h8,Uid:fdb0c430-20af-475d-8ff7-b47df0e68ff4,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:08.151284 env[1106]: time="2024-02-12T19:44:08.151242426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mnzdv,Uid:8e19b2fb-6e86-46da-8e73-ec2c727ab706,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\"" Feb 12 19:44:08.153346 kubelet[1990]: E0212 19:44:08.153104 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:08.155530 env[1106]: time="2024-02-12T19:44:08.155410347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:44:08.179931 env[1106]: time="2024-02-12T19:44:08.179804112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:08.179931 env[1106]: time="2024-02-12T19:44:08.179874314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:08.179931 env[1106]: time="2024-02-12T19:44:08.179898953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:08.181000 env[1106]: time="2024-02-12T19:44:08.180627663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9 pid=2152 runtime=io.containerd.runc.v2 Feb 12 19:44:08.212472 systemd[1]: Started cri-containerd-191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9.scope. Feb 12 19:44:08.257229 env[1106]: time="2024-02-12T19:44:08.256173918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-md7h8,Uid:fdb0c430-20af-475d-8ff7-b47df0e68ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\"" Feb 12 19:44:08.258611 kubelet[1990]: E0212 19:44:08.258565 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:08.703813 kubelet[1990]: E0212 19:44:08.703204 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:08.704863 env[1106]: time="2024-02-12T19:44:08.704778510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqv7n,Uid:2915b63c-0548-43ad-9bb0-bea14f95fef9,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:08.739300 env[1106]: time="2024-02-12T19:44:08.739136816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:08.739300 env[1106]: time="2024-02-12T19:44:08.739211356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:08.739300 env[1106]: time="2024-02-12T19:44:08.739225953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:08.740264 env[1106]: time="2024-02-12T19:44:08.740127821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdf3a30e10f50ea386d67da6943974334a15f443422dd59dce280e1fb4e1d6e7 pid=2194 runtime=io.containerd.runc.v2 Feb 12 19:44:08.758668 systemd[1]: Started cri-containerd-fdf3a30e10f50ea386d67da6943974334a15f443422dd59dce280e1fb4e1d6e7.scope. Feb 12 19:44:08.813344 env[1106]: time="2024-02-12T19:44:08.813285386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqv7n,Uid:2915b63c-0548-43ad-9bb0-bea14f95fef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf3a30e10f50ea386d67da6943974334a15f443422dd59dce280e1fb4e1d6e7\"" Feb 12 19:44:08.814771 kubelet[1990]: E0212 19:44:08.814721 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:08.818653 env[1106]: time="2024-02-12T19:44:08.818605505Z" level=info msg="CreateContainer within sandbox \"fdf3a30e10f50ea386d67da6943974334a15f443422dd59dce280e1fb4e1d6e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:44:08.881877 env[1106]: time="2024-02-12T19:44:08.881801866Z" level=info msg="CreateContainer within sandbox \"fdf3a30e10f50ea386d67da6943974334a15f443422dd59dce280e1fb4e1d6e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"c53ce95f34653809a39f5d05e2bd7ffe09db253f93cc75863cb56cf1422ed142\"" Feb 12 19:44:08.884191 env[1106]: time="2024-02-12T19:44:08.884131919Z" level=info msg="StartContainer for \"c53ce95f34653809a39f5d05e2bd7ffe09db253f93cc75863cb56cf1422ed142\"" Feb 12 19:44:08.916799 systemd[1]: Started cri-containerd-c53ce95f34653809a39f5d05e2bd7ffe09db253f93cc75863cb56cf1422ed142.scope. Feb 12 19:44:08.977868 env[1106]: time="2024-02-12T19:44:08.977683214Z" level=info msg="StartContainer for \"c53ce95f34653809a39f5d05e2bd7ffe09db253f93cc75863cb56cf1422ed142\" returns successfully" Feb 12 19:44:09.515911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233083806.mount: Deactivated successfully. Feb 12 19:44:09.848759 kubelet[1990]: E0212 19:44:09.848457 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:09.866205 kubelet[1990]: I0212 19:44:09.865572 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tqv7n" podStartSLOduration=3.864723607 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:09.864636493 +0000 UTC m=+15.743309271" watchObservedRunningTime="2024-02-12 19:44:09.864723607 +0000 UTC m=+15.743396383" Feb 12 19:44:10.669238 env[1106]: time="2024-02-12T19:44:10.669078467Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:10.673997 env[1106]: time="2024-02-12T19:44:10.673932512Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:10.677788 env[1106]: time="2024-02-12T19:44:10.677713703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:10.679165 env[1106]: time="2024-02-12T19:44:10.679095392Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 19:44:10.681947 env[1106]: time="2024-02-12T19:44:10.681875118Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:44:10.689322 env[1106]: time="2024-02-12T19:44:10.689253509Z" level=info msg="CreateContainer within sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:44:10.741022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802408731.mount: Deactivated successfully. Feb 12 19:44:10.753564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223880359.mount: Deactivated successfully. 
Feb 12 19:44:10.755900 env[1106]: time="2024-02-12T19:44:10.755682851Z" level=info msg="CreateContainer within sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\"" Feb 12 19:44:10.759883 env[1106]: time="2024-02-12T19:44:10.759715841Z" level=info msg="StartContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\"" Feb 12 19:44:10.798394 systemd[1]: Started cri-containerd-74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f.scope. Feb 12 19:44:10.854626 kubelet[1990]: E0212 19:44:10.853833 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:10.938997 env[1106]: time="2024-02-12T19:44:10.937975768Z" level=info msg="StartContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" returns successfully" Feb 12 19:44:11.861429 kubelet[1990]: E0212 19:44:11.861384 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:12.864435 kubelet[1990]: E0212 19:44:12.864036 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:44:14.787651 kubelet[1990]: I0212 19:44:14.787001 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-mnzdv" podStartSLOduration=-9.223372029067862e+09 pod.CreationTimestamp="2024-02-12 19:44:07 +0000 UTC" firstStartedPulling="2024-02-12 19:44:08.15441072 +0000 UTC m=+14.033083474" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-12 19:44:12.003238043 +0000 UTC m=+17.881910828" watchObservedRunningTime="2024-02-12 19:44:14.786914866 +0000 UTC m=+20.665587643" Feb 12 19:44:17.366498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237800643.mount: Deactivated successfully. Feb 12 19:44:22.953116 env[1106]: time="2024-02-12T19:44:22.952445423Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.964661 env[1106]: time="2024-02-12T19:44:22.964591641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.974249 env[1106]: time="2024-02-12T19:44:22.974151144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:22.984206 env[1106]: time="2024-02-12T19:44:22.975197691Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:44:22.984206 env[1106]: time="2024-02-12T19:44:22.981065193Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:44:23.012604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864286156.mount: Deactivated successfully. Feb 12 19:44:23.035421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965151370.mount: Deactivated successfully. 
Feb 12 19:44:23.063872 env[1106]: time="2024-02-12T19:44:23.063438588Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\""
Feb 12 19:44:23.078102 env[1106]: time="2024-02-12T19:44:23.078029036Z" level=info msg="StartContainer for \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\""
Feb 12 19:44:23.131309 systemd[1]: Started cri-containerd-29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53.scope.
Feb 12 19:44:23.235496 env[1106]: time="2024-02-12T19:44:23.235419565Z" level=info msg="StartContainer for \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\" returns successfully"
Feb 12 19:44:23.289110 systemd[1]: cri-containerd-29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53.scope: Deactivated successfully.
Feb 12 19:44:23.473609 env[1106]: time="2024-02-12T19:44:23.473518703Z" level=info msg="shim disconnected" id=29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53
Feb 12 19:44:23.474081 env[1106]: time="2024-02-12T19:44:23.474027419Z" level=warning msg="cleaning up after shim disconnected" id=29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53 namespace=k8s.io
Feb 12 19:44:23.474254 env[1106]: time="2024-02-12T19:44:23.474231417Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:23.501425 env[1106]: time="2024-02-12T19:44:23.499853397Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2456 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:23.914958 kubelet[1990]: E0212 19:44:23.914600 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:23.922271 env[1106]: time="2024-02-12T19:44:23.922202230Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:44:23.964157 env[1106]: time="2024-02-12T19:44:23.964069114Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\""
Feb 12 19:44:23.969382 env[1106]: time="2024-02-12T19:44:23.965864731Z" level=info msg="StartContainer for \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\""
Feb 12 19:44:24.008916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53-rootfs.mount: Deactivated successfully.
Feb 12 19:44:24.024890 systemd[1]: run-containerd-runc-k8s.io-993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127-runc.gRB2Bt.mount: Deactivated successfully.
Feb 12 19:44:24.040518 systemd[1]: Started cri-containerd-993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127.scope.
Feb 12 19:44:24.106063 env[1106]: time="2024-02-12T19:44:24.105991750Z" level=info msg="StartContainer for \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\" returns successfully"
Feb 12 19:44:24.108385 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:44:24.108969 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:44:24.109242 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 19:44:24.113467 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:44:24.138787 systemd[1]: cri-containerd-993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127.scope: Deactivated successfully.
Feb 12 19:44:24.158536 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:44:24.196781 env[1106]: time="2024-02-12T19:44:24.196563237Z" level=info msg="shim disconnected" id=993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127
Feb 12 19:44:24.196781 env[1106]: time="2024-02-12T19:44:24.196627157Z" level=warning msg="cleaning up after shim disconnected" id=993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127 namespace=k8s.io
Feb 12 19:44:24.196781 env[1106]: time="2024-02-12T19:44:24.196641021Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:24.218590 env[1106]: time="2024-02-12T19:44:24.218393098Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2520 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:24.920178 kubelet[1990]: E0212 19:44:24.920138 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:24.931951 env[1106]: time="2024-02-12T19:44:24.929980528Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:44:24.968467 env[1106]: time="2024-02-12T19:44:24.968411146Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\""
Feb 12 19:44:24.970128 env[1106]: time="2024-02-12T19:44:24.970065243Z" level=info msg="StartContainer for \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\""
Feb 12 19:44:24.997543 systemd[1]: Started cri-containerd-c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617.scope.
Feb 12 19:44:25.008003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127-rootfs.mount: Deactivated successfully.
Feb 12 19:44:25.068474 env[1106]: time="2024-02-12T19:44:25.068083393Z" level=info msg="StartContainer for \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\" returns successfully"
Feb 12 19:44:25.073892 systemd[1]: cri-containerd-c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617.scope: Deactivated successfully.
Feb 12 19:44:25.108054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617-rootfs.mount: Deactivated successfully.
Feb 12 19:44:25.122105 env[1106]: time="2024-02-12T19:44:25.122038350Z" level=info msg="shim disconnected" id=c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617
Feb 12 19:44:25.122105 env[1106]: time="2024-02-12T19:44:25.122100978Z" level=warning msg="cleaning up after shim disconnected" id=c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617 namespace=k8s.io
Feb 12 19:44:25.122524 env[1106]: time="2024-02-12T19:44:25.122130196Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:25.138676 env[1106]: time="2024-02-12T19:44:25.138592793Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2578 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:25.925619 kubelet[1990]: E0212 19:44:25.925585 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:25.932127 env[1106]: time="2024-02-12T19:44:25.932078272Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:44:25.968391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926651726.mount: Deactivated successfully.
Feb 12 19:44:25.980990 env[1106]: time="2024-02-12T19:44:25.980895622Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\""
Feb 12 19:44:25.982724 env[1106]: time="2024-02-12T19:44:25.982666047Z" level=info msg="StartContainer for \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\""
Feb 12 19:44:26.033235 systemd[1]: run-containerd-runc-k8s.io-5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4-runc.hWOK0N.mount: Deactivated successfully.
Feb 12 19:44:26.037741 systemd[1]: Started cri-containerd-5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4.scope.
Feb 12 19:44:26.082198 systemd[1]: cri-containerd-5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4.scope: Deactivated successfully.
Feb 12 19:44:26.087404 env[1106]: time="2024-02-12T19:44:26.087336310Z" level=info msg="StartContainer for \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\" returns successfully"
Feb 12 19:44:26.121962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4-rootfs.mount: Deactivated successfully.
Feb 12 19:44:26.139459 env[1106]: time="2024-02-12T19:44:26.139393297Z" level=info msg="shim disconnected" id=5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4
Feb 12 19:44:26.140082 env[1106]: time="2024-02-12T19:44:26.140028937Z" level=warning msg="cleaning up after shim disconnected" id=5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4 namespace=k8s.io
Feb 12 19:44:26.140237 env[1106]: time="2024-02-12T19:44:26.140212515Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:26.156319 env[1106]: time="2024-02-12T19:44:26.156201270Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2634 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:26.943970 kubelet[1990]: E0212 19:44:26.943894 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:26.962541 env[1106]: time="2024-02-12T19:44:26.962365287Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:44:27.026712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083695001.mount: Deactivated successfully.
Feb 12 19:44:27.041663 env[1106]: time="2024-02-12T19:44:27.041495918Z" level=info msg="CreateContainer within sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\""
Feb 12 19:44:27.042629 env[1106]: time="2024-02-12T19:44:27.042451764Z" level=info msg="StartContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\""
Feb 12 19:44:27.090845 systemd[1]: Started cri-containerd-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652.scope.
Feb 12 19:44:27.167383 env[1106]: time="2024-02-12T19:44:27.167305821Z" level=info msg="StartContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" returns successfully"
Feb 12 19:44:27.451483 kubelet[1990]: I0212 19:44:27.450040 1990 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:44:27.489405 kubelet[1990]: I0212 19:44:27.489325 1990 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:27.496763 kubelet[1990]: I0212 19:44:27.496524 1990 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:27.498889 systemd[1]: Created slice kubepods-burstable-podfa21ef8a_316a_4ac2_9761_7e921ab6c1e8.slice.
Feb 12 19:44:27.513572 systemd[1]: Created slice kubepods-burstable-pod66f54bbc_e513_4651_8435_56e00b571cae.slice.
Feb 12 19:44:27.628453 kubelet[1990]: I0212 19:44:27.628389 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdp2g\" (UniqueName: \"kubernetes.io/projected/fa21ef8a-316a-4ac2-9761-7e921ab6c1e8-kube-api-access-cdp2g\") pod \"coredns-787d4945fb-xfbnw\" (UID: \"fa21ef8a-316a-4ac2-9761-7e921ab6c1e8\") " pod="kube-system/coredns-787d4945fb-xfbnw"
Feb 12 19:44:27.628793 kubelet[1990]: I0212 19:44:27.628774 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa21ef8a-316a-4ac2-9761-7e921ab6c1e8-config-volume\") pod \"coredns-787d4945fb-xfbnw\" (UID: \"fa21ef8a-316a-4ac2-9761-7e921ab6c1e8\") " pod="kube-system/coredns-787d4945fb-xfbnw"
Feb 12 19:44:27.628962 kubelet[1990]: I0212 19:44:27.628947 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f54bbc-e513-4651-8435-56e00b571cae-config-volume\") pod \"coredns-787d4945fb-zw2cf\" (UID: \"66f54bbc-e513-4651-8435-56e00b571cae\") " pod="kube-system/coredns-787d4945fb-zw2cf"
Feb 12 19:44:27.629083 kubelet[1990]: I0212 19:44:27.629070 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwk4n\" (UniqueName: \"kubernetes.io/projected/66f54bbc-e513-4651-8435-56e00b571cae-kube-api-access-gwk4n\") pod \"coredns-787d4945fb-zw2cf\" (UID: \"66f54bbc-e513-4651-8435-56e00b571cae\") " pod="kube-system/coredns-787d4945fb-zw2cf"
Feb 12 19:44:27.808234 kubelet[1990]: E0212 19:44:27.808187 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:27.810044 env[1106]: time="2024-02-12T19:44:27.809972322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xfbnw,Uid:fa21ef8a-316a-4ac2-9761-7e921ab6c1e8,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:27.817909 kubelet[1990]: E0212 19:44:27.817867 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:27.819363 env[1106]: time="2024-02-12T19:44:27.818903042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zw2cf,Uid:66f54bbc-e513-4651-8435-56e00b571cae,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:27.955969 kubelet[1990]: E0212 19:44:27.955365 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:27.986994 kubelet[1990]: I0212 19:44:27.986940 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-md7h8" podStartSLOduration=-9.223372014867918e+09 pod.CreationTimestamp="2024-02-12 19:44:06 +0000 UTC" firstStartedPulling="2024-02-12 19:44:08.261402986 +0000 UTC m=+14.140075741" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:27.986552626 +0000 UTC m=+33.865225410" watchObservedRunningTime="2024-02-12 19:44:27.986857225 +0000 UTC m=+33.865530007"
Feb 12 19:44:28.029547 systemd[1]: run-containerd-runc-k8s.io-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652-runc.qlQaqX.mount: Deactivated successfully.
Feb 12 19:44:28.959249 kubelet[1990]: E0212 19:44:28.958675 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:29.960539 kubelet[1990]: E0212 19:44:29.960495 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:30.091959 systemd-networkd[1004]: cilium_host: Link UP
Feb 12 19:44:30.099157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 19:44:30.099340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 19:44:30.100148 systemd-networkd[1004]: cilium_net: Link UP
Feb 12 19:44:30.101034 systemd-networkd[1004]: cilium_net: Gained carrier
Feb 12 19:44:30.102517 systemd-networkd[1004]: cilium_host: Gained carrier
Feb 12 19:44:30.428509 systemd-networkd[1004]: cilium_vxlan: Link UP
Feb 12 19:44:30.428517 systemd-networkd[1004]: cilium_vxlan: Gained carrier
Feb 12 19:44:30.675039 systemd-networkd[1004]: cilium_net: Gained IPv6LL
Feb 12 19:44:30.931020 systemd-networkd[1004]: cilium_host: Gained IPv6LL
Feb 12 19:44:31.021772 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:44:31.763093 systemd-networkd[1004]: cilium_vxlan: Gained IPv6LL
Feb 12 19:44:32.157999 systemd-networkd[1004]: lxc_health: Link UP
Feb 12 19:44:32.169150 kubelet[1990]: E0212 19:44:32.169114 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:32.173992 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:44:32.173501 systemd-networkd[1004]: lxc_health: Gained carrier
Feb 12 19:44:32.431759 systemd-networkd[1004]: lxcc0ea71122c48: Link UP
Feb 12 19:44:32.438766 kernel: eth0: renamed from tmpd4fda
Feb 12 19:44:32.449669 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc0ea71122c48: link becomes ready
Feb 12 19:44:32.446680 systemd-networkd[1004]: lxcc0ea71122c48: Gained carrier
Feb 12 19:44:32.496227 systemd-networkd[1004]: lxc42a5e45f7bf8: Link UP
Feb 12 19:44:32.504468 kernel: eth0: renamed from tmp2a822
Feb 12 19:44:32.508951 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc42a5e45f7bf8: link becomes ready
Feb 12 19:44:32.510076 systemd-networkd[1004]: lxc42a5e45f7bf8: Gained carrier
Feb 12 19:44:32.967551 kubelet[1990]: E0212 19:44:32.967490 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:33.938937 systemd-networkd[1004]: lxc42a5e45f7bf8: Gained IPv6LL
Feb 12 19:44:33.939325 systemd-networkd[1004]: lxc_health: Gained IPv6LL
Feb 12 19:44:34.258985 systemd-networkd[1004]: lxcc0ea71122c48: Gained IPv6LL
Feb 12 19:44:38.848971 env[1106]: time="2024-02-12T19:44:38.833831967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:38.848971 env[1106]: time="2024-02-12T19:44:38.833903323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:38.848971 env[1106]: time="2024-02-12T19:44:38.833921472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:38.848971 env[1106]: time="2024-02-12T19:44:38.834171609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28 pid=3190 runtime=io.containerd.runc.v2
Feb 12 19:44:38.884864 env[1106]: time="2024-02-12T19:44:38.883036838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:38.884864 env[1106]: time="2024-02-12T19:44:38.883115989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:38.884864 env[1106]: time="2024-02-12T19:44:38.883128024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:38.885514 env[1106]: time="2024-02-12T19:44:38.885443741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4fdab82adfa80dd1646486a69df479e9e93ea65e34815801810069e279d7889 pid=3209 runtime=io.containerd.runc.v2
Feb 12 19:44:38.910320 systemd[1]: run-containerd-runc-k8s.io-2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28-runc.VfknHC.mount: Deactivated successfully.
Feb 12 19:44:38.926377 systemd[1]: Started cri-containerd-2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28.scope.
Feb 12 19:44:38.962813 systemd[1]: Started cri-containerd-d4fdab82adfa80dd1646486a69df479e9e93ea65e34815801810069e279d7889.scope.
Feb 12 19:44:39.103031 env[1106]: time="2024-02-12T19:44:39.102816189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zw2cf,Uid:66f54bbc-e513-4651-8435-56e00b571cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28\""
Feb 12 19:44:39.105368 kubelet[1990]: E0212 19:44:39.104628 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:39.110585 env[1106]: time="2024-02-12T19:44:39.110501516Z" level=info msg="CreateContainer within sandbox \"2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:44:39.134361 env[1106]: time="2024-02-12T19:44:39.134298264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xfbnw,Uid:fa21ef8a-316a-4ac2-9761-7e921ab6c1e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4fdab82adfa80dd1646486a69df479e9e93ea65e34815801810069e279d7889\""
Feb 12 19:44:39.136618 kubelet[1990]: E0212 19:44:39.135903 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:39.142882 env[1106]: time="2024-02-12T19:44:39.142820556Z" level=info msg="CreateContainer within sandbox \"d4fdab82adfa80dd1646486a69df479e9e93ea65e34815801810069e279d7889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:44:39.206543 env[1106]: time="2024-02-12T19:44:39.206448135Z" level=info msg="CreateContainer within sandbox \"2a822fa748fec266ed42f3ac2591820dcef3d2f5c09a80e140b09b0da227ab28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc98e748fadb5baa87a145d0efb2d86dd453605f59e78adc7f967f564d77d14b\""
Feb 12 19:44:39.207864 env[1106]: time="2024-02-12T19:44:39.207813836Z" level=info msg="StartContainer for \"bc98e748fadb5baa87a145d0efb2d86dd453605f59e78adc7f967f564d77d14b\""
Feb 12 19:44:39.216521 env[1106]: time="2024-02-12T19:44:39.216431335Z" level=info msg="CreateContainer within sandbox \"d4fdab82adfa80dd1646486a69df479e9e93ea65e34815801810069e279d7889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40c294ce630d50050149ff98d70af0e6ad12ebaea6021075a5c0047e52a371a5\""
Feb 12 19:44:39.218521 env[1106]: time="2024-02-12T19:44:39.218454905Z" level=info msg="StartContainer for \"40c294ce630d50050149ff98d70af0e6ad12ebaea6021075a5c0047e52a371a5\""
Feb 12 19:44:39.285313 systemd[1]: Started cri-containerd-bc98e748fadb5baa87a145d0efb2d86dd453605f59e78adc7f967f564d77d14b.scope.
Feb 12 19:44:39.307086 systemd[1]: Started cri-containerd-40c294ce630d50050149ff98d70af0e6ad12ebaea6021075a5c0047e52a371a5.scope.
Feb 12 19:44:39.429302 env[1106]: time="2024-02-12T19:44:39.428237272Z" level=info msg="StartContainer for \"bc98e748fadb5baa87a145d0efb2d86dd453605f59e78adc7f967f564d77d14b\" returns successfully"
Feb 12 19:44:39.458136 env[1106]: time="2024-02-12T19:44:39.458057320Z" level=info msg="StartContainer for \"40c294ce630d50050149ff98d70af0e6ad12ebaea6021075a5c0047e52a371a5\" returns successfully"
Feb 12 19:44:39.991988 kubelet[1990]: E0212 19:44:39.991432 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:40.001116 kubelet[1990]: E0212 19:44:40.001051 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:40.059273 kubelet[1990]: I0212 19:44:40.059208 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-zw2cf" podStartSLOduration=33.059141546 pod.CreationTimestamp="2024-02-12 19:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:40.038712948 +0000 UTC m=+45.917385730" watchObservedRunningTime="2024-02-12 19:44:40.059141546 +0000 UTC m=+45.937814321"
Feb 12 19:44:40.138988 kubelet[1990]: I0212 19:44:40.138940 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xfbnw" podStartSLOduration=33.138869668 pod.CreationTimestamp="2024-02-12 19:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:40.101806374 +0000 UTC m=+45.980479151" watchObservedRunningTime="2024-02-12 19:44:40.138869668 +0000 UTC m=+46.017542447"
Feb 12 19:44:41.002642 kubelet[1990]: E0212 19:44:41.002595 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:41.004492 kubelet[1990]: E0212 19:44:41.004435 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:44:42.006314 kubelet[1990]: E0212 19:44:42.006269 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:06.770396 kubelet[1990]: E0212 19:45:06.770332 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:14.780234 kubelet[1990]: E0212 19:45:14.780181 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:16.522772 systemd[1]: Started sshd@5-64.23.173.239:22-139.178.68.195:41132.service.
Feb 12 19:45:16.592076 sshd[3404]: Accepted publickey for core from 139.178.68.195 port 41132 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:16.598456 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:16.609373 systemd[1]: Started session-6.scope.
Feb 12 19:45:16.610390 systemd-logind[1097]: New session 6 of user core.
Feb 12 19:45:16.908203 sshd[3404]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:16.913546 systemd-logind[1097]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:45:16.913763 systemd[1]: sshd@5-64.23.173.239:22-139.178.68.195:41132.service: Deactivated successfully.
Feb 12 19:45:16.915038 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:45:16.916803 systemd-logind[1097]: Removed session 6.
Feb 12 19:45:19.766331 kubelet[1990]: E0212 19:45:19.765815 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:21.916274 systemd[1]: Started sshd@6-64.23.173.239:22-139.178.68.195:41138.service.
Feb 12 19:45:21.974588 sshd[3417]: Accepted publickey for core from 139.178.68.195 port 41138 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:21.977856 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:21.985561 systemd[1]: Started session-7.scope.
Feb 12 19:45:21.986223 systemd-logind[1097]: New session 7 of user core.
Feb 12 19:45:22.169806 sshd[3417]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:22.179614 systemd[1]: sshd@6-64.23.173.239:22-139.178.68.195:41138.service: Deactivated successfully.
Feb 12 19:45:22.182039 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:45:22.183662 systemd-logind[1097]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:45:22.185924 systemd-logind[1097]: Removed session 7.
Feb 12 19:45:27.182350 systemd[1]: Started sshd@7-64.23.173.239:22-139.178.68.195:41084.service.
Feb 12 19:45:27.252232 sshd[3430]: Accepted publickey for core from 139.178.68.195 port 41084 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:27.255227 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:27.264042 systemd[1]: Started session-8.scope.
Feb 12 19:45:27.265630 systemd-logind[1097]: New session 8 of user core.
Feb 12 19:45:27.495597 sshd[3430]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:27.507640 systemd[1]: sshd@7-64.23.173.239:22-139.178.68.195:41084.service: Deactivated successfully.
Feb 12 19:45:27.509282 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:45:27.510801 systemd-logind[1097]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:45:27.512304 systemd-logind[1097]: Removed session 8.
Feb 12 19:45:28.766435 kubelet[1990]: E0212 19:45:28.766386 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:32.509389 systemd[1]: Started sshd@8-64.23.173.239:22-139.178.68.195:41100.service.
Feb 12 19:45:32.596226 sshd[3443]: Accepted publickey for core from 139.178.68.195 port 41100 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:32.599632 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:32.619749 systemd[1]: Started session-9.scope.
Feb 12 19:45:32.620509 systemd-logind[1097]: New session 9 of user core.
Feb 12 19:45:32.767066 kubelet[1990]: E0212 19:45:32.766901 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:32.857110 sshd[3443]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:32.862491 systemd-logind[1097]: Session 9 logged out. Waiting for processes to exit.
Feb 12 19:45:32.863207 systemd[1]: sshd@8-64.23.173.239:22-139.178.68.195:41100.service: Deactivated successfully.
Feb 12 19:45:32.864446 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 19:45:32.865965 systemd-logind[1097]: Removed session 9.
Feb 12 19:45:37.865510 systemd[1]: Started sshd@9-64.23.173.239:22-139.178.68.195:32882.service.
Feb 12 19:45:37.917724 sshd[3456]: Accepted publickey for core from 139.178.68.195 port 32882 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:37.920591 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:37.932572 systemd-logind[1097]: New session 10 of user core.
Feb 12 19:45:37.939146 systemd[1]: Started session-10.scope.
Feb 12 19:45:38.153173 sshd[3456]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:38.157976 systemd[1]: sshd@9-64.23.173.239:22-139.178.68.195:32882.service: Deactivated successfully.
Feb 12 19:45:38.158954 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:45:38.160167 systemd-logind[1097]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:45:38.161687 systemd-logind[1097]: Removed session 10.
Feb 12 19:45:43.163885 systemd[1]: Started sshd@10-64.23.173.239:22-139.178.68.195:32892.service.
Feb 12 19:45:43.270098 sshd[3472]: Accepted publickey for core from 139.178.68.195 port 32892 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:43.273658 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:43.284317 systemd-logind[1097]: New session 11 of user core.
Feb 12 19:45:43.285555 systemd[1]: Started session-11.scope.
Feb 12 19:45:43.478639 sshd[3472]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:43.491653 systemd[1]: Started sshd@11-64.23.173.239:22-139.178.68.195:32898.service.
Feb 12 19:45:43.493662 systemd[1]: sshd@10-64.23.173.239:22-139.178.68.195:32892.service: Deactivated successfully.
Feb 12 19:45:43.495312 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:45:43.498202 systemd-logind[1097]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:45:43.501537 systemd-logind[1097]: Removed session 11.
Feb 12 19:45:43.556697 sshd[3484]: Accepted publickey for core from 139.178.68.195 port 32898 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:43.560154 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:43.569690 systemd[1]: Started session-12.scope.
Feb 12 19:45:43.571947 systemd-logind[1097]: New session 12 of user core.
Feb 12 19:45:45.064328 sshd[3484]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:45.078698 systemd[1]: Started sshd@12-64.23.173.239:22-139.178.68.195:32902.service.
Feb 12 19:45:45.081872 systemd[1]: sshd@11-64.23.173.239:22-139.178.68.195:32898.service: Deactivated successfully.
Feb 12 19:45:45.086573 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:45:45.090784 systemd-logind[1097]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:45:45.093040 systemd-logind[1097]: Removed session 12.
Feb 12 19:45:45.162472 sshd[3494]: Accepted publickey for core from 139.178.68.195 port 32902 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:45.165658 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:45.175367 systemd-logind[1097]: New session 13 of user core.
Feb 12 19:45:45.176759 systemd[1]: Started session-13.scope.
Feb 12 19:45:45.386672 sshd[3494]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:45.392493 systemd[1]: sshd@12-64.23.173.239:22-139.178.68.195:32902.service: Deactivated successfully.
Feb 12 19:45:45.393895 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 19:45:45.395319 systemd-logind[1097]: Session 13 logged out. Waiting for processes to exit.
Feb 12 19:45:45.397637 systemd-logind[1097]: Removed session 13.
Feb 12 19:45:45.766377 kubelet[1990]: E0212 19:45:45.766160 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:50.395821 systemd[1]: Started sshd@13-64.23.173.239:22-139.178.68.195:50010.service.
Feb 12 19:45:50.446594 sshd[3506]: Accepted publickey for core from 139.178.68.195 port 50010 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:50.449217 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:50.456011 systemd[1]: Started session-14.scope.
Feb 12 19:45:50.456890 systemd-logind[1097]: New session 14 of user core.
Feb 12 19:45:50.662006 sshd[3506]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:50.668427 systemd[1]: sshd@13-64.23.173.239:22-139.178.68.195:50010.service: Deactivated successfully.
Feb 12 19:45:50.671002 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 19:45:50.674361 systemd-logind[1097]: Session 14 logged out. Waiting for processes to exit.
Feb 12 19:45:50.676167 systemd-logind[1097]: Removed session 14.
Feb 12 19:45:50.766074 kubelet[1990]: E0212 19:45:50.766024 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:45:55.672267 systemd[1]: Started sshd@14-64.23.173.239:22-139.178.68.195:50018.service.
Feb 12 19:45:55.736177 sshd[3520]: Accepted publickey for core from 139.178.68.195 port 50018 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:55.739931 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:55.749315 systemd[1]: Started session-15.scope.
Feb 12 19:45:55.750327 systemd-logind[1097]: New session 15 of user core.
Feb 12 19:45:55.947951 sshd[3520]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:55.956604 systemd[1]: Started sshd@15-64.23.173.239:22-139.178.68.195:50022.service.
Feb 12 19:45:55.961383 systemd[1]: sshd@14-64.23.173.239:22-139.178.68.195:50018.service: Deactivated successfully.
Feb 12 19:45:55.963197 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 19:45:55.967209 systemd-logind[1097]: Session 15 logged out. Waiting for processes to exit.
Feb 12 19:45:55.969998 systemd-logind[1097]: Removed session 15.
Feb 12 19:45:56.025511 sshd[3531]: Accepted publickey for core from 139.178.68.195 port 50022 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:56.028421 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:56.037463 systemd[1]: Started session-16.scope.
Feb 12 19:45:56.038701 systemd-logind[1097]: New session 16 of user core.
Feb 12 19:45:56.565313 sshd[3531]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:56.575657 systemd[1]: Started sshd@16-64.23.173.239:22-139.178.68.195:49952.service.
Feb 12 19:45:56.577629 systemd[1]: sshd@15-64.23.173.239:22-139.178.68.195:50022.service: Deactivated successfully.
Feb 12 19:45:56.578586 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:45:56.579908 systemd-logind[1097]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:45:56.581632 systemd-logind[1097]: Removed session 16.
Feb 12 19:45:56.654571 sshd[3541]: Accepted publickey for core from 139.178.68.195 port 49952 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:56.657392 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:56.665877 systemd-logind[1097]: New session 17 of user core.
Feb 12 19:45:56.666841 systemd[1]: Started session-17.scope.
Feb 12 19:45:58.057445 sshd[3541]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:58.069130 systemd[1]: Started sshd@17-64.23.173.239:22-139.178.68.195:49958.service.
Feb 12 19:45:58.074371 systemd[1]: sshd@16-64.23.173.239:22-139.178.68.195:49952.service: Deactivated successfully.
Feb 12 19:45:58.083887 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:45:58.086459 systemd-logind[1097]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:45:58.089511 systemd-logind[1097]: Removed session 17.
Feb 12 19:45:58.143238 sshd[3565]: Accepted publickey for core from 139.178.68.195 port 49958 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:58.146804 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:58.153651 systemd-logind[1097]: New session 18 of user core.
Feb 12 19:45:58.155553 systemd[1]: Started session-18.scope.
Feb 12 19:45:58.679116 sshd[3565]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:58.691360 systemd[1]: Started sshd@18-64.23.173.239:22-139.178.68.195:49962.service.
Feb 12 19:45:58.699891 systemd[1]: sshd@17-64.23.173.239:22-139.178.68.195:49958.service: Deactivated successfully.
Feb 12 19:45:58.702998 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:45:58.708561 systemd-logind[1097]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:45:58.714977 systemd-logind[1097]: Removed session 18.
Feb 12 19:45:58.758157 sshd[3620]: Accepted publickey for core from 139.178.68.195 port 49962 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:45:58.761458 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:58.772385 systemd-logind[1097]: New session 19 of user core.
Feb 12 19:45:58.772758 systemd[1]: Started session-19.scope.
Feb 12 19:45:58.989795 sshd[3620]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:58.996052 systemd[1]: sshd@18-64.23.173.239:22-139.178.68.195:49962.service: Deactivated successfully.
Feb 12 19:45:58.997128 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:45:58.998464 systemd-logind[1097]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:45:59.004806 systemd-logind[1097]: Removed session 19.
Feb 12 19:46:03.766205 kubelet[1990]: E0212 19:46:03.766141 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:46:04.000668 systemd[1]: Started sshd@19-64.23.173.239:22-139.178.68.195:49970.service.
Feb 12 19:46:04.057938 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 49970 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:04.061525 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:04.071047 systemd[1]: Started session-20.scope.
Feb 12 19:46:04.072575 systemd-logind[1097]: New session 20 of user core.
Feb 12 19:46:04.250292 sshd[3633]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:04.256412 systemd[1]: sshd@19-64.23.173.239:22-139.178.68.195:49970.service: Deactivated successfully.
Feb 12 19:46:04.257991 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 19:46:04.259003 systemd-logind[1097]: Session 20 logged out. Waiting for processes to exit.
Feb 12 19:46:04.260383 systemd-logind[1097]: Removed session 20.
Feb 12 19:46:09.260320 systemd[1]: Started sshd@20-64.23.173.239:22-139.178.68.195:48666.service.
Feb 12 19:46:09.319616 sshd[3673]: Accepted publickey for core from 139.178.68.195 port 48666 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:09.322300 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:09.330005 systemd[1]: Started session-21.scope.
Feb 12 19:46:09.331874 systemd-logind[1097]: New session 21 of user core.
Feb 12 19:46:09.506403 sshd[3673]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:09.511182 systemd[1]: sshd@20-64.23.173.239:22-139.178.68.195:48666.service: Deactivated successfully.
Feb 12 19:46:09.512257 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:46:09.513993 systemd-logind[1097]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:46:09.515491 systemd-logind[1097]: Removed session 21.
Feb 12 19:46:14.516994 systemd[1]: Started sshd@21-64.23.173.239:22-139.178.68.195:48680.service.
Feb 12 19:46:14.568231 sshd[3687]: Accepted publickey for core from 139.178.68.195 port 48680 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:14.572978 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:14.585874 systemd-logind[1097]: New session 22 of user core.
Feb 12 19:46:14.585971 systemd[1]: Started session-22.scope.
Feb 12 19:46:14.825537 sshd[3687]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:14.834859 systemd[1]: sshd@21-64.23.173.239:22-139.178.68.195:48680.service: Deactivated successfully.
Feb 12 19:46:14.836314 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 19:46:14.838699 systemd-logind[1097]: Session 22 logged out. Waiting for processes to exit.
Feb 12 19:46:14.841001 systemd-logind[1097]: Removed session 22.
Feb 12 19:46:17.766331 kubelet[1990]: E0212 19:46:17.766288 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:46:19.840872 systemd[1]: Started sshd@22-64.23.173.239:22-139.178.68.195:45120.service.
Feb 12 19:46:19.894814 sshd[3699]: Accepted publickey for core from 139.178.68.195 port 45120 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:19.900586 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:19.914530 systemd-logind[1097]: New session 23 of user core.
Feb 12 19:46:19.916215 systemd[1]: Started session-23.scope.
Feb 12 19:46:20.101652 sshd[3699]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:20.107176 systemd-logind[1097]: Session 23 logged out. Waiting for processes to exit.
Feb 12 19:46:20.107566 systemd[1]: sshd@22-64.23.173.239:22-139.178.68.195:45120.service: Deactivated successfully.
Feb 12 19:46:20.109017 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 19:46:20.110835 systemd-logind[1097]: Removed session 23.
Feb 12 19:46:25.110460 systemd[1]: Started sshd@23-64.23.173.239:22-139.178.68.195:45136.service.
Feb 12 19:46:25.158927 sshd[3711]: Accepted publickey for core from 139.178.68.195 port 45136 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:25.161670 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:25.168836 systemd-logind[1097]: New session 24 of user core.
Feb 12 19:46:25.170376 systemd[1]: Started session-24.scope.
Feb 12 19:46:25.384946 sshd[3711]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:25.391016 systemd[1]: sshd@23-64.23.173.239:22-139.178.68.195:45136.service: Deactivated successfully.
Feb 12 19:46:25.392242 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 19:46:25.394293 systemd-logind[1097]: Session 24 logged out. Waiting for processes to exit.
Feb 12 19:46:25.395571 systemd-logind[1097]: Removed session 24.
Feb 12 19:46:30.396794 systemd[1]: Started sshd@24-64.23.173.239:22-139.178.68.195:36440.service.
Feb 12 19:46:30.451148 sshd[3723]: Accepted publickey for core from 139.178.68.195 port 36440 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:30.454070 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:30.463705 systemd[1]: Started session-25.scope.
Feb 12 19:46:30.465431 systemd-logind[1097]: New session 25 of user core.
Feb 12 19:46:30.645351 sshd[3723]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:30.654527 systemd[1]: sshd@24-64.23.173.239:22-139.178.68.195:36440.service: Deactivated successfully.
Feb 12 19:46:30.655716 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:46:30.656850 systemd-logind[1097]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:46:30.658316 systemd-logind[1097]: Removed session 25.
Feb 12 19:46:32.766053 kubelet[1990]: E0212 19:46:32.766002 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:46:35.656081 systemd[1]: Started sshd@25-64.23.173.239:22-139.178.68.195:36450.service.
Feb 12 19:46:35.715481 sshd[3735]: Accepted publickey for core from 139.178.68.195 port 36450 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:35.718539 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:35.729211 systemd[1]: Started session-26.scope.
Feb 12 19:46:35.729965 systemd-logind[1097]: New session 26 of user core.
Feb 12 19:46:35.904281 sshd[3735]: pam_unix(sshd:session): session closed for user core
Feb 12 19:46:35.916150 systemd[1]: Started sshd@26-64.23.173.239:22-139.178.68.195:36454.service.
Feb 12 19:46:35.916940 systemd[1]: sshd@25-64.23.173.239:22-139.178.68.195:36450.service: Deactivated successfully.
Feb 12 19:46:35.920401 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 19:46:35.922154 systemd-logind[1097]: Session 26 logged out. Waiting for processes to exit.
Feb 12 19:46:35.925049 systemd-logind[1097]: Removed session 26.
Feb 12 19:46:35.980688 sshd[3746]: Accepted publickey for core from 139.178.68.195 port 36454 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:46:35.985074 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:46:35.994864 systemd-logind[1097]: New session 27 of user core.
Feb 12 19:46:35.996121 systemd[1]: Started session-27.scope.
Feb 12 19:46:38.532050 env[1106]: time="2024-02-12T19:46:38.531968543Z" level=info msg="StopContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" with timeout 30 (s)"
Feb 12 19:46:38.533810 env[1106]: time="2024-02-12T19:46:38.533531253Z" level=info msg="Stop container \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" with signal terminated"
Feb 12 19:46:38.593894 systemd[1]: run-containerd-runc-k8s.io-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652-runc.0y9xAQ.mount: Deactivated successfully.
Feb 12 19:46:38.598639 systemd[1]: cri-containerd-74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f.scope: Deactivated successfully.
Feb 12 19:46:38.658478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f-rootfs.mount: Deactivated successfully.
Feb 12 19:46:38.671369 env[1106]: time="2024-02-12T19:46:38.671268535Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:46:38.682476 env[1106]: time="2024-02-12T19:46:38.682407476Z" level=info msg="shim disconnected" id=74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f
Feb 12 19:46:38.682710 env[1106]: time="2024-02-12T19:46:38.682492047Z" level=warning msg="cleaning up after shim disconnected" id=74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f namespace=k8s.io
Feb 12 19:46:38.682710 env[1106]: time="2024-02-12T19:46:38.682506791Z" level=info msg="cleaning up dead shim"
Feb 12 19:46:38.684719 env[1106]: time="2024-02-12T19:46:38.684658586Z" level=info msg="StopContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" with timeout 1 (s)"
Feb 12 19:46:38.685976 env[1106]: time="2024-02-12T19:46:38.685886281Z" level=info msg="Stop container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" with signal terminated"
Feb 12 19:46:38.703645 systemd-networkd[1004]: lxc_health: Link DOWN
Feb 12 19:46:38.703656 systemd-networkd[1004]: lxc_health: Lost carrier
Feb 12 19:46:38.751465 env[1106]: time="2024-02-12T19:46:38.751401890Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n"
Feb 12 19:46:38.752607 systemd[1]: cri-containerd-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652.scope: Deactivated successfully.
Feb 12 19:46:38.753091 systemd[1]: cri-containerd-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652.scope: Consumed 11.392s CPU time.
Feb 12 19:46:38.768288 kubelet[1990]: E0212 19:46:38.768237 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 12 19:46:38.773953 env[1106]: time="2024-02-12T19:46:38.773889053Z" level=info msg="StopContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" returns successfully"
Feb 12 19:46:38.781469 env[1106]: time="2024-02-12T19:46:38.778813722Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\""
Feb 12 19:46:38.781943 env[1106]: time="2024-02-12T19:46:38.781884662Z" level=info msg="Container to stop \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.785778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b-shm.mount: Deactivated successfully.
Feb 12 19:46:38.807817 systemd[1]: cri-containerd-b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b.scope: Deactivated successfully.
Feb 12 19:46:38.839351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652-rootfs.mount: Deactivated successfully.
Feb 12 19:46:38.878018 env[1106]: time="2024-02-12T19:46:38.877908089Z" level=info msg="shim disconnected" id=6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652
Feb 12 19:46:38.878018 env[1106]: time="2024-02-12T19:46:38.878021626Z" level=warning msg="cleaning up after shim disconnected" id=6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652 namespace=k8s.io
Feb 12 19:46:38.878018 env[1106]: time="2024-02-12T19:46:38.878041216Z" level=info msg="cleaning up dead shim"
Feb 12 19:46:38.896455 env[1106]: time="2024-02-12T19:46:38.896386640Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3847 runtime=io.containerd.runc.v2\n"
Feb 12 19:46:38.925536 env[1106]: time="2024-02-12T19:46:38.923333193Z" level=info msg="shim disconnected" id=b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b
Feb 12 19:46:38.925536 env[1106]: time="2024-02-12T19:46:38.923552380Z" level=warning msg="cleaning up after shim disconnected" id=b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b namespace=k8s.io
Feb 12 19:46:38.925536 env[1106]: time="2024-02-12T19:46:38.923655744Z" level=info msg="cleaning up dead shim"
Feb 12 19:46:38.927004 env[1106]: time="2024-02-12T19:46:38.926942677Z" level=info msg="StopContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" returns successfully"
Feb 12 19:46:38.928315 env[1106]: time="2024-02-12T19:46:38.928254790Z" level=info msg="StopPodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\""
Feb 12 19:46:38.928529 env[1106]: time="2024-02-12T19:46:38.928350384Z" level=info msg="Container to stop \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.928529 env[1106]: time="2024-02-12T19:46:38.928393647Z" level=info msg="Container to stop \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.928529 env[1106]: time="2024-02-12T19:46:38.928415239Z" level=info msg="Container to stop \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.928529 env[1106]: time="2024-02-12T19:46:38.928435338Z" level=info msg="Container to stop \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.928529 env[1106]: time="2024-02-12T19:46:38.928453804Z" level=info msg="Container to stop \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:46:38.947827 systemd[1]: cri-containerd-191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9.scope: Deactivated successfully.
Feb 12 19:46:38.978976 env[1106]: time="2024-02-12T19:46:38.978898468Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3860 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:46:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb 12 19:46:38.980131 env[1106]: time="2024-02-12T19:46:38.980042898Z" level=info msg="TearDown network for sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" successfully"
Feb 12 19:46:38.980131 env[1106]: time="2024-02-12T19:46:38.980116305Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" returns successfully"
Feb 12 19:46:39.045725 env[1106]: time="2024-02-12T19:46:39.044548680Z" level=info msg="shim disconnected" id=191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9
Feb 12 19:46:39.045725 env[1106]: time="2024-02-12T19:46:39.044617069Z" level=warning msg="cleaning up after shim disconnected" id=191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9 namespace=k8s.io
Feb 12 19:46:39.045725 env[1106]: time="2024-02-12T19:46:39.044636390Z" level=info msg="cleaning up dead shim"
Feb 12 19:46:39.065355 env[1106]: time="2024-02-12T19:46:39.065066034Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3891 runtime=io.containerd.runc.v2\n"
Feb 12 19:46:39.066250 env[1106]: time="2024-02-12T19:46:39.066190687Z" level=info msg="TearDown network for sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" successfully"
Feb 12 19:46:39.066250 env[1106]: time="2024-02-12T19:46:39.066242527Z" level=info msg="StopPodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" returns successfully"
Feb 12 19:46:39.134076 kubelet[1990]: I0212 19:46:39.134021 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e19b2fb-6e86-46da-8e73-ec2c727ab706-cilium-config-path\") pod \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\" (UID: \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\") "
Feb 12 19:46:39.134577 kubelet[1990]: I0212 19:46:39.134544 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hostproc\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.134805 kubelet[1990]: I0212 19:46:39.134788 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-net\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.134991 kubelet[1990]: I0212 19:46:39.134977 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hubble-tls\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.135142 kubelet[1990]: I0212 19:46:39.135124 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-xtables-lock\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.135328 kubelet[1990]: I0212 19:46:39.135291 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-etc-cni-netd\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.135533 kubelet[1990]: I0212 19:46:39.135507 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv97b\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-kube-api-access-mv97b\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.135705 kubelet[1990]: I0212 19:46:39.135693 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w8l8\" (UniqueName: \"kubernetes.io/projected/8e19b2fb-6e86-46da-8e73-ec2c727ab706-kube-api-access-7w8l8\") pod \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\" (UID: \"8e19b2fb-6e86-46da-8e73-ec2c727ab706\") "
Feb 12 19:46:39.135883 kubelet[1990]: I0212 19:46:39.135868 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-bpf-maps\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.136035 kubelet[1990]: I0212 19:46:39.136022 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-run\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.136463 kubelet[1990]: W0212 19:46:39.136249 1990 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8e19b2fb-6e86-46da-8e73-ec2c727ab706/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:46:39.136982 kubelet[1990]: I0212 19:46:39.136611 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.137299 kubelet[1990]: I0212 19:46:39.137276 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hostproc" (OuterVolumeSpecName: "hostproc") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.137486 kubelet[1990]: I0212 19:46:39.137465 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.139459 kubelet[1990]: I0212 19:46:39.139392 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e19b2fb-6e86-46da-8e73-ec2c727ab706-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e19b2fb-6e86-46da-8e73-ec2c727ab706" (UID: "8e19b2fb-6e86-46da-8e73-ec2c727ab706"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:46:39.140126 kubelet[1990]: I0212 19:46:39.139819 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.140126 kubelet[1990]: I0212 19:46:39.139879 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.140378 kubelet[1990]: I0212 19:46:39.140226 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:46:39.154458 kubelet[1990]: I0212 19:46:39.154357 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e19b2fb-6e86-46da-8e73-ec2c727ab706-kube-api-access-7w8l8" (OuterVolumeSpecName: "kube-api-access-7w8l8") pod "8e19b2fb-6e86-46da-8e73-ec2c727ab706" (UID: "8e19b2fb-6e86-46da-8e73-ec2c727ab706"). InnerVolumeSpecName "kube-api-access-7w8l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:46:39.155767 kubelet[1990]: I0212 19:46:39.155645 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:46:39.156626 kubelet[1990]: I0212 19:46:39.156545 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-kube-api-access-mv97b" (OuterVolumeSpecName: "kube-api-access-mv97b") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "kube-api-access-mv97b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237423 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cni-path\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237506 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-lib-modules\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237544 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-cgroup\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237581 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-kernel\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237680 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdb0c430-20af-475d-8ff7-b47df0e68ff4-clustermesh-secrets\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.246779 kubelet[1990]: I0212 19:46:39.237726 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-config-path\") pod \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\" (UID: \"fdb0c430-20af-475d-8ff7-b47df0e68ff4\") "
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237845 1990 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mv97b\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-kube-api-access-mv97b\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237886 1990 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-7w8l8\" (UniqueName: \"kubernetes.io/projected/8e19b2fb-6e86-46da-8e73-ec2c727ab706-kube-api-access-7w8l8\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237904 1990 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-bpf-maps\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237919 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-run\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237937 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e19b2fb-6e86-46da-8e73-ec2c727ab706-cilium-config-path\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237949 1990 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hostproc\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237963 1990 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdb0c430-20af-475d-8ff7-b47df0e68ff4-hubble-tls\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247253 kubelet[1990]: I0212 19:46:39.237976 1990 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-net\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247608 kubelet[1990]: I0212 19:46:39.237989 1990 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-xtables-lock\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247608 kubelet[1990]: I0212 19:46:39.238007 1990 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-etc-cni-netd\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\""
Feb 12 19:46:39.247608 kubelet[1990]: W0212 19:46:39.238363 1990 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fdb0c430-20af-475d-8ff7-b47df0e68ff4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:46:39.247608 kubelet[1990]: I0212 19:46:39.240978 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:46:39.247608 kubelet[1990]: I0212 19:46:39.241075 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cni-path" (OuterVolumeSpecName: "cni-path") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:39.247608 kubelet[1990]: I0212 19:46:39.241141 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:39.247927 kubelet[1990]: I0212 19:46:39.241165 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:39.247927 kubelet[1990]: I0212 19:46:39.241185 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:39.247927 kubelet[1990]: I0212 19:46:39.246111 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb0c430-20af-475d-8ff7-b47df0e68ff4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fdb0c430-20af-475d-8ff7-b47df0e68ff4" (UID: "fdb0c430-20af-475d-8ff7-b47df0e68ff4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:46:39.339453 kubelet[1990]: I0212 19:46:39.339236 1990 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdb0c430-20af-475d-8ff7-b47df0e68ff4-clustermesh-secrets\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:39.339453 kubelet[1990]: I0212 19:46:39.339312 1990 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-host-proc-sys-kernel\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:39.339453 kubelet[1990]: I0212 19:46:39.339334 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-config-path\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:39.339453 kubelet[1990]: I0212 19:46:39.339350 1990 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-lib-modules\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:39.339453 kubelet[1990]: I0212 19:46:39.339383 1990 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cni-path\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 
19:46:39.339453 kubelet[1990]: I0212 19:46:39.339400 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdb0c430-20af-475d-8ff7-b47df0e68ff4-cilium-cgroup\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:39.433503 kubelet[1990]: I0212 19:46:39.433449 1990 scope.go:115] "RemoveContainer" containerID="6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652" Feb 12 19:46:39.445291 env[1106]: time="2024-02-12T19:46:39.445069106Z" level=info msg="RemoveContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\"" Feb 12 19:46:39.446285 systemd[1]: Removed slice kubepods-burstable-podfdb0c430_20af_475d_8ff7_b47df0e68ff4.slice. Feb 12 19:46:39.446498 systemd[1]: kubepods-burstable-podfdb0c430_20af_475d_8ff7_b47df0e68ff4.slice: Consumed 11.560s CPU time. Feb 12 19:46:39.458404 env[1106]: time="2024-02-12T19:46:39.458332980Z" level=info msg="RemoveContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" returns successfully" Feb 12 19:46:39.461354 kubelet[1990]: I0212 19:46:39.461262 1990 scope.go:115] "RemoveContainer" containerID="5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4" Feb 12 19:46:39.462029 systemd[1]: Removed slice kubepods-besteffort-pod8e19b2fb_6e86_46da_8e73_ec2c727ab706.slice. 
Feb 12 19:46:39.467405 env[1106]: time="2024-02-12T19:46:39.467262414Z" level=info msg="RemoveContainer for \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\"" Feb 12 19:46:39.483088 env[1106]: time="2024-02-12T19:46:39.476987519Z" level=info msg="RemoveContainer for \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\" returns successfully" Feb 12 19:46:39.483918 kubelet[1990]: I0212 19:46:39.483882 1990 scope.go:115] "RemoveContainer" containerID="c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617" Feb 12 19:46:39.486763 env[1106]: time="2024-02-12T19:46:39.486276356Z" level=info msg="RemoveContainer for \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\"" Feb 12 19:46:39.500252 env[1106]: time="2024-02-12T19:46:39.500162935Z" level=info msg="RemoveContainer for \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\" returns successfully" Feb 12 19:46:39.500658 kubelet[1990]: I0212 19:46:39.500607 1990 scope.go:115] "RemoveContainer" containerID="993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127" Feb 12 19:46:39.507595 env[1106]: time="2024-02-12T19:46:39.507534852Z" level=info msg="RemoveContainer for \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\"" Feb 12 19:46:39.528747 env[1106]: time="2024-02-12T19:46:39.528654310Z" level=info msg="RemoveContainer for \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\" returns successfully" Feb 12 19:46:39.530010 kubelet[1990]: I0212 19:46:39.529973 1990 scope.go:115] "RemoveContainer" containerID="29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53" Feb 12 19:46:39.534505 env[1106]: time="2024-02-12T19:46:39.534400384Z" level=info msg="RemoveContainer for \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\"" Feb 12 19:46:39.542051 env[1106]: time="2024-02-12T19:46:39.541982204Z" level=info msg="RemoveContainer for 
\"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\" returns successfully" Feb 12 19:46:39.542650 kubelet[1990]: I0212 19:46:39.542612 1990 scope.go:115] "RemoveContainer" containerID="6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652" Feb 12 19:46:39.543715 env[1106]: time="2024-02-12T19:46:39.543535373Z" level=error msg="ContainerStatus for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": not found" Feb 12 19:46:39.546314 kubelet[1990]: E0212 19:46:39.546256 1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": not found" containerID="6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652" Feb 12 19:46:39.547446 kubelet[1990]: I0212 19:46:39.547392 1990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652} err="failed to get container status \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": not found" Feb 12 19:46:39.547997 kubelet[1990]: I0212 19:46:39.547963 1990 scope.go:115] "RemoveContainer" containerID="5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4" Feb 12 19:46:39.548757 env[1106]: time="2024-02-12T19:46:39.548621769Z" level=error msg="ContainerStatus for \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\": not found" Feb 12 19:46:39.549798 kubelet[1990]: E0212 19:46:39.549759 1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\": not found" containerID="5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4" Feb 12 19:46:39.550081 kubelet[1990]: I0212 19:46:39.550061 1990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4} err="failed to get container status \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f92747e0401112784bca493463be9fb3b410f16bb892643a57332d24b2ad3c4\": not found" Feb 12 19:46:39.550206 kubelet[1990]: I0212 19:46:39.550189 1990 scope.go:115] "RemoveContainer" containerID="c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617" Feb 12 19:46:39.552334 env[1106]: time="2024-02-12T19:46:39.552202986Z" level=error msg="ContainerStatus for \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\": not found" Feb 12 19:46:39.554264 kubelet[1990]: E0212 19:46:39.554217 1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\": not found" containerID="c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617" Feb 12 19:46:39.554708 kubelet[1990]: I0212 19:46:39.554288 1990 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617} err="failed to get container status \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1f471e68ec043f30736dbdaab4639a75105d8df65b501d9004561582bc39617\": not found" Feb 12 19:46:39.554708 kubelet[1990]: I0212 19:46:39.554312 1990 scope.go:115] "RemoveContainer" containerID="993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127" Feb 12 19:46:39.555129 env[1106]: time="2024-02-12T19:46:39.554810582Z" level=error msg="ContainerStatus for \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\": not found" Feb 12 19:46:39.555320 kubelet[1990]: E0212 19:46:39.555294 1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\": not found" containerID="993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127" Feb 12 19:46:39.555677 kubelet[1990]: I0212 19:46:39.555359 1990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127} err="failed to get container status \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\": rpc error: code = NotFound desc = an error occurred when try to find container \"993d47d10a2a118921153bc4092c4753abd15606aabcc76588e416474b20a127\": not found" Feb 12 19:46:39.555677 kubelet[1990]: I0212 19:46:39.555381 1990 scope.go:115] "RemoveContainer" containerID="29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53" Feb 12 19:46:39.557300 kubelet[1990]: E0212 19:46:39.556149 
1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\": not found" containerID="29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53" Feb 12 19:46:39.557300 kubelet[1990]: I0212 19:46:39.556204 1990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53} err="failed to get container status \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\": rpc error: code = NotFound desc = an error occurred when try to find container \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\": not found" Feb 12 19:46:39.557300 kubelet[1990]: I0212 19:46:39.556227 1990 scope.go:115] "RemoveContainer" containerID="74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f" Feb 12 19:46:39.557572 env[1106]: time="2024-02-12T19:46:39.555875617Z" level=error msg="ContainerStatus for \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29bb697fbc169d986ba15a1e975ec5f06124c0632bb240acee43b3f631cb6b53\": not found" Feb 12 19:46:39.562776 env[1106]: time="2024-02-12T19:46:39.562292127Z" level=info msg="RemoveContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\"" Feb 12 19:46:39.574817 env[1106]: time="2024-02-12T19:46:39.574639552Z" level=info msg="RemoveContainer for \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" returns successfully" Feb 12 19:46:39.575530 kubelet[1990]: I0212 19:46:39.575488 1990 scope.go:115] "RemoveContainer" containerID="74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f" Feb 12 19:46:39.576373 env[1106]: time="2024-02-12T19:46:39.576185022Z" level=error msg="ContainerStatus for 
\"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\": not found" Feb 12 19:46:39.576834 kubelet[1990]: E0212 19:46:39.576803 1990 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\": not found" containerID="74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f" Feb 12 19:46:39.577324 kubelet[1990]: I0212 19:46:39.577296 1990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f} err="failed to get container status \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\": rpc error: code = NotFound desc = an error occurred when try to find container \"74867cd91da7e68f2770a8ce370358368cbd1e82053d3c247572e2f3f152036f\": not found" Feb 12 19:46:39.584060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9-rootfs.mount: Deactivated successfully. Feb 12 19:46:39.584211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9-shm.mount: Deactivated successfully. Feb 12 19:46:39.584317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b-rootfs.mount: Deactivated successfully. Feb 12 19:46:39.584401 systemd[1]: var-lib-kubelet-pods-8e19b2fb\x2d6e86\x2d46da\x2d8e73\x2dec2c727ab706-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7w8l8.mount: Deactivated successfully. 
Feb 12 19:46:39.584483 systemd[1]: var-lib-kubelet-pods-fdb0c430\x2d20af\x2d475d\x2d8ff7\x2db47df0e68ff4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmv97b.mount: Deactivated successfully. Feb 12 19:46:39.584572 systemd[1]: var-lib-kubelet-pods-fdb0c430\x2d20af\x2d475d\x2d8ff7\x2db47df0e68ff4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:46:39.584646 systemd[1]: var-lib-kubelet-pods-fdb0c430\x2d20af\x2d475d\x2d8ff7\x2db47df0e68ff4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:46:39.703247 kubelet[1990]: E0212 19:46:39.702674 1990 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:46:39.766079 kubelet[1990]: E0212 19:46:39.766030 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:40.334386 sshd[3746]: pam_unix(sshd:session): session closed for user core Feb 12 19:46:40.349414 systemd[1]: Started sshd@27-64.23.173.239:22-139.178.68.195:54754.service. Feb 12 19:46:40.350698 systemd[1]: sshd@26-64.23.173.239:22-139.178.68.195:36454.service: Deactivated successfully. Feb 12 19:46:40.354363 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:46:40.354684 systemd[1]: session-27.scope: Consumed 1.414s CPU time. Feb 12 19:46:40.356529 systemd-logind[1097]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:46:40.359221 systemd-logind[1097]: Removed session 27. 
Feb 12 19:46:40.434138 sshd[3912]: Accepted publickey for core from 139.178.68.195 port 54754 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:46:40.437853 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:46:40.451371 systemd[1]: Started session-28.scope. Feb 12 19:46:40.452603 systemd-logind[1097]: New session 28 of user core. Feb 12 19:46:40.767025 env[1106]: time="2024-02-12T19:46:40.766886801Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\"" Feb 12 19:46:40.767705 env[1106]: time="2024-02-12T19:46:40.767044984Z" level=info msg="TearDown network for sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" successfully" Feb 12 19:46:40.767705 env[1106]: time="2024-02-12T19:46:40.767120645Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" returns successfully" Feb 12 19:46:40.767705 env[1106]: time="2024-02-12T19:46:40.767375098Z" level=info msg="StopContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" with timeout 1 (s)" Feb 12 19:46:40.767705 env[1106]: time="2024-02-12T19:46:40.767415706Z" level=error msg="StopContainer for \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": not found" Feb 12 19:46:40.767978 kubelet[1990]: E0212 19:46:40.767712 1990 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652\": not found" containerID="6f26b3dfa2cdee7dc2a4c906060ac159bd058d5ce66cf3d73a7fe53b933bf652" Feb 12 19:46:40.768693 env[1106]: time="2024-02-12T19:46:40.768658148Z" level=info msg="StopPodSandbox 
for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\"" Feb 12 19:46:40.768866 env[1106]: time="2024-02-12T19:46:40.768782462Z" level=info msg="TearDown network for sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" successfully" Feb 12 19:46:40.768866 env[1106]: time="2024-02-12T19:46:40.768814376Z" level=info msg="StopPodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" returns successfully" Feb 12 19:46:40.770392 kubelet[1990]: I0212 19:46:40.770306 1990 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8e19b2fb-6e86-46da-8e73-ec2c727ab706 path="/var/lib/kubelet/pods/8e19b2fb-6e86-46da-8e73-ec2c727ab706/volumes" Feb 12 19:46:40.772948 kubelet[1990]: I0212 19:46:40.772903 1990 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fdb0c430-20af-475d-8ff7-b47df0e68ff4 path="/var/lib/kubelet/pods/fdb0c430-20af-475d-8ff7-b47df0e68ff4/volumes" Feb 12 19:46:41.906167 sshd[3912]: pam_unix(sshd:session): session closed for user core Feb 12 19:46:41.913458 systemd[1]: sshd@27-64.23.173.239:22-139.178.68.195:54754.service: Deactivated successfully. Feb 12 19:46:41.916502 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 19:46:41.916985 systemd[1]: session-28.scope: Consumed 1.103s CPU time. Feb 12 19:46:41.920424 systemd-logind[1097]: Session 28 logged out. Waiting for processes to exit. Feb 12 19:46:41.926899 systemd[1]: Started sshd@28-64.23.173.239:22-139.178.68.195:54770.service. Feb 12 19:46:41.934433 systemd-logind[1097]: Removed session 28. Feb 12 19:46:41.988089 sshd[3926]: Accepted publickey for core from 139.178.68.195 port 54770 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:46:41.991110 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:46:42.003217 systemd[1]: Started session-29.scope. Feb 12 19:46:42.004840 systemd-logind[1097]: New session 29 of user core. 
Feb 12 19:46:42.033181 kubelet[1990]: I0212 19:46:42.032947 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034180 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="mount-cgroup" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034238 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="mount-bpf-fs" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034266 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="cilium-agent" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034279 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="clean-cilium-state" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034292 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e19b2fb-6e86-46da-8e73-ec2c727ab706" containerName="cilium-operator" Feb 12 19:46:42.034824 kubelet[1990]: E0212 19:46:42.034303 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="apply-sysctl-overwrites" Feb 12 19:46:42.034824 kubelet[1990]: I0212 19:46:42.034432 1990 memory_manager.go:346] "RemoveStaleState removing state" podUID="8e19b2fb-6e86-46da-8e73-ec2c727ab706" containerName="cilium-operator" Feb 12 19:46:42.034824 kubelet[1990]: I0212 19:46:42.034446 1990 memory_manager.go:346] "RemoveStaleState removing state" podUID="fdb0c430-20af-475d-8ff7-b47df0e68ff4" containerName="cilium-agent" Feb 12 19:46:42.045687 systemd[1]: Created slice kubepods-burstable-pod4905ee89_d200_4321_bfe4_bd9b404efac2.slice. 
Feb 12 19:46:42.060654 kubelet[1990]: W0212 19:46:42.060605 1990 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.061064 kubelet[1990]: W0212 19:46:42.061042 1990 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.062461 kubelet[1990]: E0212 19:46:42.062410 1990 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.062819 kubelet[1990]: W0212 19:46:42.062798 1990 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.063038 kubelet[1990]: E0212 19:46:42.063002 1990 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.063297 kubelet[1990]: W0212 19:46:42.063280 1990 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.063442 kubelet[1990]: E0212 19:46:42.063429 1990 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.064284 kubelet[1990]: E0212 19:46:42.064258 1990 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-4-a1ae76f648" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-4-a1ae76f648' and this object Feb 12 19:46:42.078329 kubelet[1990]: I0212 19:46:42.078268 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-clustermesh-secrets\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.078663 kubelet[1990]: I0212 19:46:42.078642 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-hostproc\") pod \"cilium-rxcvs\" (UID: 
\"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.078909 kubelet[1990]: I0212 19:46:42.078881 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-net\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.079171 kubelet[1990]: I0212 19:46:42.079116 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-xtables-lock\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.079353 kubelet[1990]: I0212 19:46:42.079332 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-cgroup\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.079545 kubelet[1990]: I0212 19:46:42.079531 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.079705 kubelet[1990]: I0212 19:46:42.079691 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-ipsec-secrets\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.079901 
kubelet[1990]: I0212 19:46:42.079862 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cni-path\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080059 kubelet[1990]: I0212 19:46:42.080032 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8m6h\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-kube-api-access-b8m6h\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080213 kubelet[1990]: I0212 19:46:42.080191 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-etc-cni-netd\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080363 kubelet[1990]: I0212 19:46:42.080351 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-lib-modules\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080546 kubelet[1990]: I0212 19:46:42.080526 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-kernel\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080701 kubelet[1990]: I0212 19:46:42.080687 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-hubble-tls\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.080924 kubelet[1990]: I0212 19:46:42.080910 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-run\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.081073 kubelet[1990]: I0212 19:46:42.081058 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-bpf-maps\") pod \"cilium-rxcvs\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " pod="kube-system/cilium-rxcvs" Feb 12 19:46:42.337854 sshd[3926]: pam_unix(sshd:session): session closed for user core Feb 12 19:46:42.347686 systemd[1]: Started sshd@29-64.23.173.239:22-139.178.68.195:54786.service. Feb 12 19:46:42.348926 systemd[1]: sshd@28-64.23.173.239:22-139.178.68.195:54770.service: Deactivated successfully. Feb 12 19:46:42.357289 systemd[1]: session-29.scope: Deactivated successfully. Feb 12 19:46:42.359571 systemd-logind[1097]: Session 29 logged out. Waiting for processes to exit. Feb 12 19:46:42.362251 systemd-logind[1097]: Removed session 29. Feb 12 19:46:42.419195 sshd[3937]: Accepted publickey for core from 139.178.68.195 port 54786 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:46:42.423135 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:46:42.430967 systemd[1]: Started session-30.scope. Feb 12 19:46:42.431888 systemd-logind[1097]: New session 30 of user core. 
Feb 12 19:46:43.184258 kubelet[1990]: E0212 19:46:43.184163 1990 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:43.185722 kubelet[1990]: E0212 19:46:43.184354 1990 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path podName:4905ee89-d200-4321-bfe4-bd9b404efac2 nodeName:}" failed. No retries permitted until 2024-02-12 19:46:43.684306056 +0000 UTC m=+169.562978837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path") pod "cilium-rxcvs" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:43.855561 kubelet[1990]: E0212 19:46:43.855501 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:43.856358 env[1106]: time="2024-02-12T19:46:43.856300314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxcvs,Uid:4905ee89-d200-4321-bfe4-bd9b404efac2,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:43.897351 env[1106]: time="2024-02-12T19:46:43.896991439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:43.897351 env[1106]: time="2024-02-12T19:46:43.897213696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:43.897351 env[1106]: time="2024-02-12T19:46:43.897269796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:43.898123 env[1106]: time="2024-02-12T19:46:43.898038939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527 pid=3956 runtime=io.containerd.runc.v2 Feb 12 19:46:43.939387 systemd[1]: Started cri-containerd-1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527.scope. Feb 12 19:46:43.959268 systemd[1]: run-containerd-runc-k8s.io-1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527-runc.8AU5Lt.mount: Deactivated successfully. Feb 12 19:46:44.035566 env[1106]: time="2024-02-12T19:46:44.035469598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxcvs,Uid:4905ee89-d200-4321-bfe4-bd9b404efac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\"" Feb 12 19:46:44.038616 kubelet[1990]: E0212 19:46:44.038255 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:44.043936 env[1106]: time="2024-02-12T19:46:44.043866563Z" level=info msg="CreateContainer within sandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:46:44.091467 env[1106]: time="2024-02-12T19:46:44.091382808Z" level=info msg="CreateContainer within sandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\"" Feb 12 19:46:44.109596 env[1106]: time="2024-02-12T19:46:44.093513464Z" level=info msg="StartContainer for \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\"" Feb 12 19:46:44.189256 
systemd[1]: Started cri-containerd-bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138.scope. Feb 12 19:46:44.226046 systemd[1]: cri-containerd-bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138.scope: Deactivated successfully. Feb 12 19:46:44.292495 env[1106]: time="2024-02-12T19:46:44.292378792Z" level=info msg="shim disconnected" id=bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138 Feb 12 19:46:44.293417 env[1106]: time="2024-02-12T19:46:44.293098494Z" level=warning msg="cleaning up after shim disconnected" id=bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138 namespace=k8s.io Feb 12 19:46:44.293665 env[1106]: time="2024-02-12T19:46:44.293626627Z" level=info msg="cleaning up dead shim" Feb 12 19:46:44.317902 env[1106]: time="2024-02-12T19:46:44.317828990Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4014 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:46:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:46:44.318664 env[1106]: time="2024-02-12T19:46:44.318471257Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Feb 12 19:46:44.319022 env[1106]: time="2024-02-12T19:46:44.318967895Z" level=error msg="Failed to pipe stdout of container \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\"" error="reading from a closed fifo" Feb 12 19:46:44.319202 env[1106]: time="2024-02-12T19:46:44.319162434Z" level=error msg="Failed to pipe stderr of container \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\"" error="reading from a closed fifo" Feb 12 19:46:44.326889 env[1106]: time="2024-02-12T19:46:44.326758259Z" level=error msg="StartContainer for 
\"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:46:44.327232 kubelet[1990]: E0212 19:46:44.327175 1990 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138" Feb 12 19:46:44.328994 kubelet[1990]: E0212 19:46:44.328052 1990 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:46:44.328994 kubelet[1990]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:46:44.328994 kubelet[1990]: rm /hostbin/cilium-mount Feb 12 19:46:44.328994 kubelet[1990]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b8m6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rxcvs_kube-system(4905ee89-d200-4321-bfe4-bd9b404efac2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:46:44.330218 kubelet[1990]: E0212 19:46:44.328153 1990 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rxcvs" podUID=4905ee89-d200-4321-bfe4-bd9b404efac2 Feb 12 19:46:44.491720 env[1106]: time="2024-02-12T19:46:44.491642909Z" level=info msg="StopPodSandbox for \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\"" Feb 12 19:46:44.492113 env[1106]: time="2024-02-12T19:46:44.492065415Z" level=info msg="Container to stop \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:46:44.505825 systemd[1]: cri-containerd-1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527.scope: Deactivated successfully. Feb 12 19:46:44.572766 env[1106]: time="2024-02-12T19:46:44.572679329Z" level=info msg="shim disconnected" id=1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527 Feb 12 19:46:44.573560 env[1106]: time="2024-02-12T19:46:44.573506124Z" level=warning msg="cleaning up after shim disconnected" id=1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527 namespace=k8s.io Feb 12 19:46:44.573854 env[1106]: time="2024-02-12T19:46:44.573820939Z" level=info msg="cleaning up dead shim" Feb 12 19:46:44.593837 env[1106]: time="2024-02-12T19:46:44.593716381Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n" Feb 12 19:46:44.594809 env[1106]: time="2024-02-12T19:46:44.594692437Z" level=info msg="TearDown network for sandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" successfully" Feb 12 19:46:44.595083 env[1106]: time="2024-02-12T19:46:44.595039321Z" level=info msg="StopPodSandbox for \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" returns successfully" Feb 12 19:46:44.705648 kubelet[1990]: E0212 19:46:44.705603 1990 kubelet.go:2475] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:46:44.723940 kubelet[1990]: I0212 19:46:44.723883 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-lib-modules\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.724359 kubelet[1990]: I0212 19:46:44.723955 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.724650 kubelet[1990]: I0212 19:46:44.724621 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-etc-cni-netd\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.724881 kubelet[1990]: I0212 19:46:44.724702 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.724992 kubelet[1990]: I0212 19:46:44.724860 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-hubble-tls\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.724992 kubelet[1990]: I0212 19:46:44.724958 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-cgroup\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725147 kubelet[1990]: I0212 19:46:44.725010 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725147 kubelet[1990]: I0212 19:46:44.725034 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cni-path\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725147 kubelet[1990]: I0212 19:46:44.725051 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-run\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725147 kubelet[1990]: I0212 19:46:44.725067 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-bpf-maps\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725147 kubelet[1990]: I0212 19:46:44.725099 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-hostproc\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725443 kubelet[1990]: I0212 19:46:44.725381 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-ipsec-secrets\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725443 kubelet[1990]: I0212 19:46:44.725425 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-kernel\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725595 kubelet[1990]: I0212 19:46:44.725455 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-xtables-lock\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725595 kubelet[1990]: I0212 19:46:44.725477 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-clustermesh-secrets\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725595 kubelet[1990]: I0212 19:46:44.725494 
1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-net\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725595 kubelet[1990]: I0212 19:46:44.725520 1990 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8m6h\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-kube-api-access-b8m6h\") pod \"4905ee89-d200-4321-bfe4-bd9b404efac2\" (UID: \"4905ee89-d200-4321-bfe4-bd9b404efac2\") " Feb 12 19:46:44.725595 kubelet[1990]: I0212 19:46:44.725593 1990 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-lib-modules\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.725834 kubelet[1990]: I0212 19:46:44.725610 1990 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-etc-cni-netd\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.728185 kubelet[1990]: I0212 19:46:44.728119 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.728496 kubelet[1990]: W0212 19:46:44.728439 1990 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4905ee89-d200-4321-bfe4-bd9b404efac2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:46:44.728758 kubelet[1990]: I0212 19:46:44.728226 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-hostproc" (OuterVolumeSpecName: "hostproc") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.733523 kubelet[1990]: I0212 19:46:44.733451 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cni-path" (OuterVolumeSpecName: "cni-path") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.733910 kubelet[1990]: I0212 19:46:44.733849 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.734125 kubelet[1990]: I0212 19:46:44.734098 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.734289 kubelet[1990]: I0212 19:46:44.734268 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.734442 kubelet[1990]: I0212 19:46:44.734419 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.736202 kubelet[1990]: I0212 19:46:44.735547 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:46:44.737595 kubelet[1990]: I0212 19:46:44.737548 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:46:44.753928 kubelet[1990]: I0212 19:46:44.752145 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:46:44.754471 kubelet[1990]: I0212 19:46:44.754405 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-kube-api-access-b8m6h" (OuterVolumeSpecName: "kube-api-access-b8m6h") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "kube-api-access-b8m6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:46:44.755096 kubelet[1990]: I0212 19:46:44.754972 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:46:44.763579 kubelet[1990]: I0212 19:46:44.763516 1990 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4905ee89-d200-4321-bfe4-bd9b404efac2" (UID: "4905ee89-d200-4321-bfe4-bd9b404efac2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:46:44.775935 systemd[1]: Removed slice kubepods-burstable-pod4905ee89_d200_4321_bfe4_bd9b404efac2.slice. 
Feb 12 19:46:44.826783 kubelet[1990]: I0212 19:46:44.826663 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-config-path\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.826783 kubelet[1990]: I0212 19:46:44.826783 1990 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cni-path\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826808 1990 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-hubble-tls\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826824 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-cgroup\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826855 1990 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-hostproc\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826876 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-ipsec-secrets\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826890 1990 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-cilium-run\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 
kubelet[1990]: I0212 19:46:44.826904 1990 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-bpf-maps\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826917 1990 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-xtables-lock\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827208 kubelet[1990]: I0212 19:46:44.826945 1990 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-kernel\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827571 kubelet[1990]: I0212 19:46:44.826960 1990 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4905ee89-d200-4321-bfe4-bd9b404efac2-clustermesh-secrets\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827571 kubelet[1990]: I0212 19:46:44.826977 1990 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-b8m6h\" (UniqueName: \"kubernetes.io/projected/4905ee89-d200-4321-bfe4-bd9b404efac2-kube-api-access-b8m6h\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.827571 kubelet[1990]: I0212 19:46:44.826992 1990 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4905ee89-d200-4321-bfe4-bd9b404efac2-host-proc-sys-net\") on node \"ci-3510.3.2-4-a1ae76f648\" DevicePath \"\"" Feb 12 19:46:44.874482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527-rootfs.mount: Deactivated successfully. 
Feb 12 19:46:44.874654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527-shm.mount: Deactivated successfully. Feb 12 19:46:44.874802 systemd[1]: var-lib-kubelet-pods-4905ee89\x2dd200\x2d4321\x2dbfe4\x2dbd9b404efac2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:46:44.874890 systemd[1]: var-lib-kubelet-pods-4905ee89\x2dd200\x2d4321\x2dbfe4\x2dbd9b404efac2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:46:44.874998 systemd[1]: var-lib-kubelet-pods-4905ee89\x2dd200\x2d4321\x2dbfe4\x2dbd9b404efac2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:46:44.875201 systemd[1]: var-lib-kubelet-pods-4905ee89\x2dd200\x2d4321\x2dbfe4\x2dbd9b404efac2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8m6h.mount: Deactivated successfully. Feb 12 19:46:45.488484 kubelet[1990]: I0212 19:46:45.488448 1990 scope.go:115] "RemoveContainer" containerID="bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138" Feb 12 19:46:45.491812 env[1106]: time="2024-02-12T19:46:45.490992521Z" level=info msg="RemoveContainer for \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\"" Feb 12 19:46:45.502414 env[1106]: time="2024-02-12T19:46:45.502307706Z" level=info msg="RemoveContainer for \"bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138\" returns successfully" Feb 12 19:46:45.555535 kubelet[1990]: I0212 19:46:45.555474 1990 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:46:45.555823 kubelet[1990]: E0212 19:46:45.555572 1990 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4905ee89-d200-4321-bfe4-bd9b404efac2" containerName="mount-cgroup" Feb 12 19:46:45.555823 kubelet[1990]: I0212 19:46:45.555628 1990 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="4905ee89-d200-4321-bfe4-bd9b404efac2" containerName="mount-cgroup" Feb 12 19:46:45.564858 systemd[1]: Created slice kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice. Feb 12 19:46:45.634530 kubelet[1990]: I0212 19:46:45.634479 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-cni-path\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.634937 kubelet[1990]: I0212 19:46:45.634913 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-etc-cni-netd\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.635130 kubelet[1990]: I0212 19:46:45.635112 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-xtables-lock\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.635268 kubelet[1990]: I0212 19:46:45.635254 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7hl\" (UniqueName: \"kubernetes.io/projected/e210b989-87e3-4106-bd4b-11e7eabcf2d3-kube-api-access-wm7hl\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.635438 kubelet[1990]: I0212 19:46:45.635421 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e210b989-87e3-4106-bd4b-11e7eabcf2d3-cilium-ipsec-secrets\") pod \"cilium-hfbr4\" (UID: 
\"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.635656 kubelet[1990]: I0212 19:46:45.635639 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-host-proc-sys-net\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.635983 kubelet[1990]: I0212 19:46:45.635942 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-host-proc-sys-kernel\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636128 kubelet[1990]: I0212 19:46:45.636011 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e210b989-87e3-4106-bd4b-11e7eabcf2d3-hubble-tls\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636128 kubelet[1990]: I0212 19:46:45.636049 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e210b989-87e3-4106-bd4b-11e7eabcf2d3-cilium-config-path\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636128 kubelet[1990]: I0212 19:46:45.636078 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-cilium-run\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636128 kubelet[1990]: 
I0212 19:46:45.636110 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-bpf-maps\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636292 kubelet[1990]: I0212 19:46:45.636145 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-cilium-cgroup\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636292 kubelet[1990]: I0212 19:46:45.636182 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-hostproc\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636292 kubelet[1990]: I0212 19:46:45.636216 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e210b989-87e3-4106-bd4b-11e7eabcf2d3-lib-modules\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.636292 kubelet[1990]: I0212 19:46:45.636247 1990 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e210b989-87e3-4106-bd4b-11e7eabcf2d3-clustermesh-secrets\") pod \"cilium-hfbr4\" (UID: \"e210b989-87e3-4106-bd4b-11e7eabcf2d3\") " pod="kube-system/cilium-hfbr4" Feb 12 19:46:45.869343 kubelet[1990]: E0212 19:46:45.868978 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:45.870906 env[1106]: time="2024-02-12T19:46:45.870358532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hfbr4,Uid:e210b989-87e3-4106-bd4b-11e7eabcf2d3,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:45.914302 env[1106]: time="2024-02-12T19:46:45.914163548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:45.914677 env[1106]: time="2024-02-12T19:46:45.914587845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:45.914677 env[1106]: time="2024-02-12T19:46:45.914632567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:45.915331 env[1106]: time="2024-02-12T19:46:45.915260083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9 pid=4076 runtime=io.containerd.runc.v2 Feb 12 19:46:45.949420 systemd[1]: Started cri-containerd-13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9.scope. 
Feb 12 19:46:46.021278 env[1106]: time="2024-02-12T19:46:46.018983523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hfbr4,Uid:e210b989-87e3-4106-bd4b-11e7eabcf2d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\"" Feb 12 19:46:46.021499 kubelet[1990]: E0212 19:46:46.020251 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:46.025081 env[1106]: time="2024-02-12T19:46:46.025023559Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:46:46.053017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913542120.mount: Deactivated successfully. Feb 12 19:46:46.081958 env[1106]: time="2024-02-12T19:46:46.081790651Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89\"" Feb 12 19:46:46.084034 env[1106]: time="2024-02-12T19:46:46.083972134Z" level=info msg="StartContainer for \"9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89\"" Feb 12 19:46:46.123351 systemd[1]: Started cri-containerd-9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89.scope. Feb 12 19:46:46.229399 env[1106]: time="2024-02-12T19:46:46.229313495Z" level=info msg="StartContainer for \"9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89\" returns successfully" Feb 12 19:46:46.288454 systemd[1]: cri-containerd-9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89.scope: Deactivated successfully. 
Feb 12 19:46:46.348926 env[1106]: time="2024-02-12T19:46:46.348689827Z" level=info msg="shim disconnected" id=9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89 Feb 12 19:46:46.348926 env[1106]: time="2024-02-12T19:46:46.348916724Z" level=warning msg="cleaning up after shim disconnected" id=9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89 namespace=k8s.io Feb 12 19:46:46.348926 env[1106]: time="2024-02-12T19:46:46.348931357Z" level=info msg="cleaning up dead shim" Feb 12 19:46:46.366123 env[1106]: time="2024-02-12T19:46:46.366024244Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n" Feb 12 19:46:46.494551 kubelet[1990]: E0212 19:46:46.494499 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:46.503892 env[1106]: time="2024-02-12T19:46:46.503816112Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:46:46.527071 env[1106]: time="2024-02-12T19:46:46.527002226Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2\"" Feb 12 19:46:46.528366 env[1106]: time="2024-02-12T19:46:46.528304681Z" level=info msg="StartContainer for \"4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2\"" Feb 12 19:46:46.579560 systemd[1]: Started cri-containerd-4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2.scope. 
Feb 12 19:46:46.648818 env[1106]: time="2024-02-12T19:46:46.648288478Z" level=info msg="StartContainer for \"4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2\" returns successfully" Feb 12 19:46:46.660953 systemd[1]: cri-containerd-4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2.scope: Deactivated successfully. Feb 12 19:46:46.710274 env[1106]: time="2024-02-12T19:46:46.710212298Z" level=info msg="shim disconnected" id=4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2 Feb 12 19:46:46.710665 env[1106]: time="2024-02-12T19:46:46.710625104Z" level=warning msg="cleaning up after shim disconnected" id=4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2 namespace=k8s.io Feb 12 19:46:46.711348 env[1106]: time="2024-02-12T19:46:46.710861733Z" level=info msg="cleaning up dead shim" Feb 12 19:46:46.727568 env[1106]: time="2024-02-12T19:46:46.727452044Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4223 runtime=io.containerd.runc.v2\n" Feb 12 19:46:46.784269 kubelet[1990]: I0212 19:46:46.784019 1990 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4905ee89-d200-4321-bfe4-bd9b404efac2 path="/var/lib/kubelet/pods/4905ee89-d200-4321-bfe4-bd9b404efac2/volumes" Feb 12 19:46:47.404981 kubelet[1990]: W0212 19:46:47.404874 1990 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4905ee89_d200_4321_bfe4_bd9b404efac2.slice/cri-containerd-bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138.scope WatchSource:0}: container "bff3eeabfca0ad605822532e6de9334cd0046214338f3faca39f759f09caa138" in namespace "k8s.io": not found Feb 12 19:46:47.502884 kubelet[1990]: E0212 19:46:47.502786 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Feb 12 19:46:47.507078 env[1106]: time="2024-02-12T19:46:47.506951718Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:46:47.550638 env[1106]: time="2024-02-12T19:46:47.550527696Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd\"" Feb 12 19:46:47.555725 env[1106]: time="2024-02-12T19:46:47.555669108Z" level=info msg="StartContainer for \"4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd\"" Feb 12 19:46:47.611949 systemd[1]: Started cri-containerd-4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd.scope. Feb 12 19:46:47.668575 env[1106]: time="2024-02-12T19:46:47.668432922Z" level=info msg="StartContainer for \"4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd\" returns successfully" Feb 12 19:46:47.677706 systemd[1]: cri-containerd-4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd.scope: Deactivated successfully. 
Feb 12 19:46:47.726276 env[1106]: time="2024-02-12T19:46:47.726214886Z" level=info msg="shim disconnected" id=4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd Feb 12 19:46:47.726707 env[1106]: time="2024-02-12T19:46:47.726282999Z" level=warning msg="cleaning up after shim disconnected" id=4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd namespace=k8s.io Feb 12 19:46:47.726707 env[1106]: time="2024-02-12T19:46:47.726299322Z" level=info msg="cleaning up dead shim" Feb 12 19:46:47.741803 env[1106]: time="2024-02-12T19:46:47.741388214Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4282 runtime=io.containerd.runc.v2\n" Feb 12 19:46:47.886432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd-rootfs.mount: Deactivated successfully. Feb 12 19:46:48.005791 kubelet[1990]: I0212 19:46:48.005754 1990 setters.go:548] "Node became not ready" node="ci-3510.3.2-4-a1ae76f648" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:46:48.005661022 +0000 UTC m=+173.884333791 LastTransitionTime:2024-02-12 19:46:48.005661022 +0000 UTC m=+173.884333791 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:46:48.509944 kubelet[1990]: E0212 19:46:48.509909 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:48.513521 env[1106]: time="2024-02-12T19:46:48.513460344Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:46:48.542168 env[1106]: 
time="2024-02-12T19:46:48.542076323Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c\"" Feb 12 19:46:48.543006 env[1106]: time="2024-02-12T19:46:48.542956136Z" level=info msg="StartContainer for \"99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c\"" Feb 12 19:46:48.582271 systemd[1]: Started cri-containerd-99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c.scope. Feb 12 19:46:48.628514 systemd[1]: cri-containerd-99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c.scope: Deactivated successfully. Feb 12 19:46:48.633013 env[1106]: time="2024-02-12T19:46:48.632873571Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice/cri-containerd-99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c.scope/memory.events\": no such file or directory" Feb 12 19:46:48.634822 env[1106]: time="2024-02-12T19:46:48.634748953Z" level=info msg="StartContainer for \"99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c\" returns successfully" Feb 12 19:46:48.668024 env[1106]: time="2024-02-12T19:46:48.667959447Z" level=info msg="shim disconnected" id=99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c Feb 12 19:46:48.668024 env[1106]: time="2024-02-12T19:46:48.668023167Z" level=warning msg="cleaning up after shim disconnected" id=99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c namespace=k8s.io Feb 12 19:46:48.668024 env[1106]: time="2024-02-12T19:46:48.668037117Z" level=info msg="cleaning up dead shim" Feb 12 19:46:48.683248 env[1106]: time="2024-02-12T19:46:48.683156071Z" level=warning msg="cleanup warnings 
time=\"2024-02-12T19:46:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4339 runtime=io.containerd.runc.v2\n" Feb 12 19:46:48.886797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c-rootfs.mount: Deactivated successfully. Feb 12 19:46:49.516132 kubelet[1990]: E0212 19:46:49.516100 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:49.522769 env[1106]: time="2024-02-12T19:46:49.522567924Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:46:49.547860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726757693.mount: Deactivated successfully. Feb 12 19:46:49.551883 env[1106]: time="2024-02-12T19:46:49.551833039Z" level=info msg="CreateContainer within sandbox \"13fa62916ff53ff5dc3023254d2be9a9ad26e86914e99e4d3beda93587b0e1c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56\"" Feb 12 19:46:49.553495 env[1106]: time="2024-02-12T19:46:49.553442070Z" level=info msg="StartContainer for \"49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56\"" Feb 12 19:46:49.583442 systemd[1]: Started cri-containerd-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56.scope. 
Feb 12 19:46:49.647204 env[1106]: time="2024-02-12T19:46:49.647139005Z" level=info msg="StartContainer for \"49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56\" returns successfully" Feb 12 19:46:49.707196 kubelet[1990]: E0212 19:46:49.707166 1990 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:46:50.485810 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 19:46:50.532044 kubelet[1990]: E0212 19:46:50.528700 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:50.553833 kubelet[1990]: W0212 19:46:50.553764 1990 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice/cri-containerd-9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89.scope WatchSource:0}: task 9679f0c531679a31c09df2a17afac60d6556e6107109657e6bb88437565d5f89 not found: not found Feb 12 19:46:50.569340 kubelet[1990]: I0212 19:46:50.569281 1990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hfbr4" podStartSLOduration=5.568224817 pod.CreationTimestamp="2024-02-12 19:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:50.565704444 +0000 UTC m=+176.444377226" watchObservedRunningTime="2024-02-12 19:46:50.568224817 +0000 UTC m=+176.446897638" Feb 12 19:46:51.008627 systemd[1]: run-containerd-runc-k8s.io-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56-runc.tpjM8m.mount: Deactivated successfully. 
Feb 12 19:46:51.532991 kubelet[1990]: E0212 19:46:51.532951 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:52.534357 kubelet[1990]: E0212 19:46:52.534323 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:53.241331 systemd[1]: run-containerd-runc-k8s.io-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56-runc.X522fK.mount: Deactivated successfully. Feb 12 19:46:53.671148 kubelet[1990]: W0212 19:46:53.670966 1990 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice/cri-containerd-4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2.scope WatchSource:0}: task 4528bda4a4f8eb6f77c63657d8695d1b7633734865b419ac7413c7baa94289f2 not found: not found Feb 12 19:46:53.891375 kubelet[1990]: E0212 19:46:53.891317 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:53.936378 systemd-networkd[1004]: lxc_health: Link UP Feb 12 19:46:53.945038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:46:53.945397 systemd-networkd[1004]: lxc_health: Gained carrier Feb 12 19:46:54.408022 env[1106]: time="2024-02-12T19:46:54.407971879Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\"" Feb 12 19:46:54.408645 env[1106]: time="2024-02-12T19:46:54.408588489Z" level=info msg="TearDown network for sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" successfully" Feb 12 19:46:54.408777 env[1106]: 
time="2024-02-12T19:46:54.408759310Z" level=info msg="StopPodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" returns successfully" Feb 12 19:46:54.409415 env[1106]: time="2024-02-12T19:46:54.409378002Z" level=info msg="RemovePodSandbox for \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\"" Feb 12 19:46:54.409694 env[1106]: time="2024-02-12T19:46:54.409651477Z" level=info msg="Forcibly stopping sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\"" Feb 12 19:46:54.409890 env[1106]: time="2024-02-12T19:46:54.409871356Z" level=info msg="TearDown network for sandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" successfully" Feb 12 19:46:54.429058 env[1106]: time="2024-02-12T19:46:54.429001153Z" level=info msg="RemovePodSandbox \"b4bbc92d4aa488c0824a09c5ca044950d0e839d6fd093215ce7d06fe42886f3b\" returns successfully" Feb 12 19:46:54.430252 env[1106]: time="2024-02-12T19:46:54.430204046Z" level=info msg="StopPodSandbox for \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\"" Feb 12 19:46:54.430436 env[1106]: time="2024-02-12T19:46:54.430338866Z" level=info msg="TearDown network for sandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" successfully" Feb 12 19:46:54.430436 env[1106]: time="2024-02-12T19:46:54.430385022Z" level=info msg="StopPodSandbox for \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" returns successfully" Feb 12 19:46:54.430919 env[1106]: time="2024-02-12T19:46:54.430884698Z" level=info msg="RemovePodSandbox for \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\"" Feb 12 19:46:54.431097 env[1106]: time="2024-02-12T19:46:54.431058535Z" level=info msg="Forcibly stopping sandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\"" Feb 12 19:46:54.431283 env[1106]: time="2024-02-12T19:46:54.431263493Z" level=info msg="TearDown network for sandbox 
\"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" successfully" Feb 12 19:46:54.436480 env[1106]: time="2024-02-12T19:46:54.436425601Z" level=info msg="RemovePodSandbox \"1aec1d1f1658a1f4ddb5cdac1d483696bf31204ce3a358e86788ab3f4d646527\" returns successfully" Feb 12 19:46:54.437460 env[1106]: time="2024-02-12T19:46:54.437416538Z" level=info msg="StopPodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\"" Feb 12 19:46:54.437746 env[1106]: time="2024-02-12T19:46:54.437677057Z" level=info msg="TearDown network for sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" successfully" Feb 12 19:46:54.437892 env[1106]: time="2024-02-12T19:46:54.437863902Z" level=info msg="StopPodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" returns successfully" Feb 12 19:46:54.438423 env[1106]: time="2024-02-12T19:46:54.438388202Z" level=info msg="RemovePodSandbox for \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\"" Feb 12 19:46:54.438518 env[1106]: time="2024-02-12T19:46:54.438426182Z" level=info msg="Forcibly stopping sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\"" Feb 12 19:46:54.438518 env[1106]: time="2024-02-12T19:46:54.438508251Z" level=info msg="TearDown network for sandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" successfully" Feb 12 19:46:54.443100 env[1106]: time="2024-02-12T19:46:54.443044841Z" level=info msg="RemovePodSandbox \"191de4e5bc1809b1688596bc6f5965a002ba79e3cbcc51ce1f9f0d9a461fc2f9\" returns successfully" Feb 12 19:46:54.539495 kubelet[1990]: E0212 19:46:54.539449 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:55.203001 systemd-networkd[1004]: lxc_health: Gained IPv6LL Feb 12 19:46:55.467503 systemd[1]: 
run-containerd-runc-k8s.io-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56-runc.sKN4SH.mount: Deactivated successfully. Feb 12 19:46:55.542426 kubelet[1990]: E0212 19:46:55.542312 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:56.783211 kubelet[1990]: W0212 19:46:56.783021 1990 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice/cri-containerd-4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd.scope WatchSource:0}: task 4d8736a5ed2b9c0010e341fb10ef23524f40ec42e315f60972932b4dfa30a5cd not found: not found Feb 12 19:46:57.720311 systemd[1]: run-containerd-runc-k8s.io-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56-runc.4ZoUMw.mount: Deactivated successfully. Feb 12 19:46:59.765769 kubelet[1990]: E0212 19:46:59.765715 1990 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 12 19:46:59.902436 kubelet[1990]: W0212 19:46:59.901932 1990 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210b989_87e3_4106_bd4b_11e7eabcf2d3.slice/cri-containerd-99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c.scope WatchSource:0}: task 99bb833c19428dbf33695e635c7505175367e5e96f4b69b948a3d32dd49f974c not found: not found Feb 12 19:46:59.982538 systemd[1]: run-containerd-runc-k8s.io-49239b6bf01cd5e8f646138392d814ae2236a92b6a70f60b1745e5d6bea73c56-runc.v0gNGz.mount: Deactivated successfully. 
Feb 12 19:47:00.198230 sshd[3937]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:00.203405 systemd[1]: sshd@29-64.23.173.239:22-139.178.68.195:54786.service: Deactivated successfully. Feb 12 19:47:00.204813 systemd[1]: session-30.scope: Deactivated successfully. Feb 12 19:47:00.209054 systemd-logind[1097]: Session 30 logged out. Waiting for processes to exit. Feb 12 19:47:00.211840 systemd-logind[1097]: Removed session 30.