Feb 9 08:54:36.877077 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 9 08:54:36.877101 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 08:54:36.877115 kernel: BIOS-provided physical RAM map: Feb 9 08:54:36.877123 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 08:54:36.877130 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 08:54:36.877138 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 08:54:36.877147 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Feb 9 08:54:36.877154 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Feb 9 08:54:36.877164 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 9 08:54:36.877171 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 08:54:36.877178 kernel: NX (Execute Disable) protection: active Feb 9 08:54:36.877184 kernel: SMBIOS 2.8 present. Feb 9 08:54:36.877191 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 9 08:54:36.877197 kernel: Hypervisor detected: KVM Feb 9 08:54:36.877205 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 08:54:36.877215 kernel: kvm-clock: cpu 0, msr 47faa001, primary cpu clock Feb 9 08:54:36.877222 kernel: kvm-clock: using sched offset of 3359430120 cycles Feb 9 08:54:36.877230 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 08:54:36.877237 kernel: tsc: Detected 1995.312 MHz processor Feb 9 08:54:36.877244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 08:54:36.877251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 08:54:36.877258 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Feb 9 08:54:36.877266 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 08:54:36.877277 kernel: ACPI: Early table checksum verification disabled Feb 9 08:54:36.877285 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Feb 9 08:54:36.877294 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877302 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877311 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877319 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 9 08:54:36.877327 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877335 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877344 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877354 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 08:54:36.877362 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 9 08:54:36.877370 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Feb 
9 08:54:36.877379 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 9 08:54:36.877387 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 9 08:54:36.877395 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 9 08:54:36.877403 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 9 08:54:36.877411 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 9 08:54:36.877425 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 08:54:36.877434 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 08:54:36.877443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 9 08:54:36.877452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 9 08:54:36.877461 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Feb 9 08:54:36.877469 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Feb 9 08:54:36.877481 kernel: Zone ranges: Feb 9 08:54:36.877488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 08:54:36.877495 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Feb 9 08:54:36.877502 kernel: Normal empty Feb 9 08:54:36.877509 kernel: Movable zone start for each node Feb 9 08:54:36.877517 kernel: Early memory node ranges Feb 9 08:54:36.877524 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 08:54:36.877531 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Feb 9 08:54:36.877538 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Feb 9 08:54:36.877547 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 08:54:36.877555 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 08:54:36.877563 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Feb 9 08:54:36.877570 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 9 08:54:36.877577 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 08:54:36.877585 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 08:54:36.877592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 08:54:36.877600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 08:54:36.877607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 08:54:36.877616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 08:54:36.877625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 08:54:36.877633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 08:54:36.877642 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 08:54:36.877651 kernel: TSC deadline timer available Feb 9 08:54:36.877660 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 08:54:36.877668 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 9 08:54:36.877677 kernel: Booting paravirtualized kernel on KVM Feb 9 08:54:36.877686 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 08:54:36.877697 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 08:54:36.877705 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 08:54:36.877714 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 08:54:36.877763 kernel: pcpu-alloc: [0] 0 1 Feb 9 08:54:36.877772 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 9 08:54:36.877781 kernel: kvm-guest: 
PV spinlocks disabled, no host support Feb 9 08:54:36.877789 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Feb 9 08:54:36.877798 kernel: Policy zone: DMA32 Feb 9 08:54:36.877808 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 08:54:36.877820 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 08:54:36.877828 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 08:54:36.877836 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 08:54:36.877843 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 08:54:36.877850 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 9 08:54:36.877858 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 08:54:36.877866 kernel: Kernel/User page tables isolation: enabled Feb 9 08:54:36.877875 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 08:54:36.877886 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 08:54:36.877894 kernel: rcu: Hierarchical RCU implementation. Feb 9 08:54:36.877904 kernel: rcu: RCU event tracing is enabled. Feb 9 08:54:36.877913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 08:54:36.877921 kernel: Rude variant of Tasks RCU enabled. Feb 9 08:54:36.877930 kernel: Tracing variant of Tasks RCU enabled. Feb 9 08:54:36.877939 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 08:54:36.877947 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 08:54:36.877956 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 08:54:36.877967 kernel: random: crng init done Feb 9 08:54:36.877976 kernel: Console: colour VGA+ 80x25 Feb 9 08:54:36.877984 kernel: printk: console [tty0] enabled Feb 9 08:54:36.877993 kernel: printk: console [ttyS0] enabled Feb 9 08:54:36.878002 kernel: ACPI: Core revision 20210730 Feb 9 08:54:36.878011 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 9 08:54:36.878020 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 08:54:36.878028 kernel: x2apic enabled Feb 9 08:54:36.878037 kernel: Switched APIC routing to physical x2apic. Feb 9 08:54:36.878048 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 08:54:36.878056 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Feb 9 08:54:36.878065 kernel: Calibrating delay loop (skipped) preset value.. 
3990.62 BogoMIPS (lpj=1995312) Feb 9 08:54:36.878074 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 9 08:54:36.878083 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 9 08:54:36.878092 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 08:54:36.878100 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 08:54:36.878109 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 08:54:36.878118 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 08:54:36.878130 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 9 08:54:36.878147 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 08:54:36.878157 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 08:54:36.878168 kernel: MDS: Mitigation: Clear CPU buffers Feb 9 08:54:36.878177 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 08:54:36.878186 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 08:54:36.878195 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 08:54:36.878204 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 08:54:36.878213 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 08:54:36.878222 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 08:54:36.878233 kernel: Freeing SMP alternatives memory: 32K Feb 9 08:54:36.878242 kernel: pid_max: default: 32768 minimum: 301 Feb 9 08:54:36.878251 kernel: LSM: Security Framework initializing Feb 9 08:54:36.878260 kernel: SELinux: Initializing. Feb 9 08:54:36.878270 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 08:54:36.878279 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 08:54:36.878290 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2) Feb 9 08:54:36.878299 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 9 08:54:36.878309 kernel: signal: max sigframe size: 1776 Feb 9 08:54:36.878318 kernel: rcu: Hierarchical SRCU implementation. Feb 9 08:54:36.878327 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 08:54:36.878336 kernel: smp: Bringing up secondary CPUs ... Feb 9 08:54:36.878345 kernel: x86: Booting SMP configuration: Feb 9 08:54:36.878354 kernel: .... 
node #0, CPUs: #1 Feb 9 08:54:36.878363 kernel: kvm-clock: cpu 1, msr 47faa041, secondary cpu clock Feb 9 08:54:36.878372 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 9 08:54:36.878383 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 08:54:36.878393 kernel: smpboot: Max logical packages: 1 Feb 9 08:54:36.878402 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Feb 9 08:54:36.878411 kernel: devtmpfs: initialized Feb 9 08:54:36.878420 kernel: x86/mm: Memory block size: 128MB Feb 9 08:54:36.878429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 08:54:36.878439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 08:54:36.878448 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 08:54:36.878457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 08:54:36.878468 kernel: audit: initializing netlink subsys (disabled) Feb 9 08:54:36.878477 kernel: audit: type=2000 audit(1707468876.390:1): state=initialized audit_enabled=0 res=1 Feb 9 08:54:36.878486 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 08:54:36.878495 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 08:54:36.878504 kernel: cpuidle: using governor menu Feb 9 08:54:36.878513 kernel: ACPI: bus type PCI registered Feb 9 08:54:36.878522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 08:54:36.878531 kernel: dca service started, version 1.12.1 Feb 9 08:54:36.878540 kernel: PCI: Using configuration type 1 for base access Feb 9 08:54:36.878551 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 08:54:36.878561 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 08:54:36.878570 kernel: ACPI: Added _OSI(Module Device) Feb 9 08:54:36.878579 kernel: ACPI: Added _OSI(Processor Device) Feb 9 08:54:36.878588 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 08:54:36.878597 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 08:54:36.878606 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 08:54:36.878615 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 08:54:36.878624 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 08:54:36.878636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 08:54:36.878645 kernel: ACPI: Interpreter enabled Feb 9 08:54:36.878654 kernel: ACPI: PM: (supports S0 S5) Feb 9 08:54:36.878663 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 08:54:36.878673 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 08:54:36.878682 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 08:54:36.878691 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 08:54:36.878911 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 08:54:36.879056 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 9 08:54:36.879074 kernel: acpiphp: Slot [3] registered Feb 9 08:54:36.879087 kernel: acpiphp: Slot [4] registered Feb 9 08:54:36.879100 kernel: acpiphp: Slot [5] registered Feb 9 08:54:36.879108 kernel: acpiphp: Slot [6] registered Feb 9 08:54:36.879117 kernel: acpiphp: Slot [7] registered Feb 9 08:54:36.879125 kernel: acpiphp: Slot [8] registered Feb 9 08:54:36.879133 kernel: acpiphp: Slot [9] registered Feb 9 08:54:36.879145 kernel: acpiphp: Slot [10] registered Feb 9 08:54:36.879154 kernel: acpiphp: Slot [11] registered Feb 9 08:54:36.879162 kernel: acpiphp: Slot [12] registered Feb 9 08:54:36.879171 kernel: acpiphp: Slot [13] registered Feb 9 08:54:36.879179 kernel: acpiphp: Slot [14] registered Feb 9 08:54:36.879187 kernel: acpiphp: Slot [15] registered Feb 9 08:54:36.879195 kernel: acpiphp: Slot [16] registered Feb 9 08:54:36.879204 kernel: acpiphp: Slot [17] registered Feb 9 08:54:36.879212 kernel: acpiphp: Slot [18] registered Feb 9 08:54:36.879220 kernel: acpiphp: Slot [19] registered Feb 9 08:54:36.879231 kernel: acpiphp: Slot [20] registered Feb 9 08:54:36.879240 kernel: acpiphp: Slot [21] registered Feb 9 08:54:36.879248 kernel: acpiphp: Slot [22] registered Feb 9 08:54:36.879256 kernel: acpiphp: Slot [23] registered Feb 9 08:54:36.879264 kernel: acpiphp: Slot [24] registered Feb 9 08:54:36.879272 kernel: acpiphp: Slot [25] registered Feb 9 08:54:36.879280 kernel: acpiphp: Slot [26] registered Feb 9 08:54:36.879288 kernel: acpiphp: Slot [27] registered Feb 9 08:54:36.879296 kernel: acpiphp: Slot [28] registered Feb 9 08:54:36.879307 kernel: acpiphp: Slot [29] registered Feb 9 08:54:36.879316 kernel: acpiphp: Slot [30] registered Feb 9 08:54:36.879324 kernel: acpiphp: Slot [31] registered Feb 9 08:54:36.879332 kernel: PCI host bridge to bus 0000:00 Feb 9 08:54:36.879443 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 08:54:36.879538 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 08:54:36.879632 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 08:54:36.879736 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 08:54:36.879833 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 9 08:54:36.879923 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 08:54:36.880048 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 08:54:36.880157 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 08:54:36.880265 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 08:54:36.880377 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 9 08:54:36.880485 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 08:54:36.880585 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 08:54:36.880685 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 08:54:36.880805 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 08:54:36.880919 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 9 08:54:36.881020 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 9 08:54:36.881131 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 08:54:36.881233 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 9 08:54:36.881333 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 9 08:54:36.881439 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 
0x030000 Feb 9 08:54:36.881542 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 9 08:54:36.881645 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 9 08:54:36.881760 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 9 08:54:36.881864 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 9 08:54:36.881964 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 08:54:36.882074 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 9 08:54:36.882176 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 9 08:54:36.882275 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 9 08:54:36.882374 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 9 08:54:36.882500 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 9 08:54:36.882605 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 9 08:54:36.882704 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 9 08:54:36.893951 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 9 08:54:36.894094 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 9 08:54:36.894201 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 9 08:54:36.894302 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 9 08:54:36.894403 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 9 08:54:36.894516 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 9 08:54:36.894620 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 08:54:36.894746 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 9 08:54:36.894851 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 9 08:54:36.894973 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Feb 9 08:54:36.895120 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 9 08:54:36.895226 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 9 08:54:36.895332 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 9 08:54:36.895442 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 9 08:54:36.895542 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 9 08:54:36.895643 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 9 08:54:36.895654 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 08:54:36.895663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 08:54:36.895672 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 08:54:36.895683 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 08:54:36.895692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 08:54:36.895701 kernel: iommu: Default domain type: Translated Feb 9 08:54:36.895709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 08:54:36.895824 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 08:54:36.895927 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 08:54:36.896025 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 9 08:54:36.896037 kernel: vgaarb: loaded Feb 9 08:54:36.896049 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 08:54:36.896058 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 08:54:36.896067 kernel: PTP clock support registered Feb 9 08:54:36.896075 kernel: PCI: Using ACPI for IRQ routing Feb 9 08:54:36.896083 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 08:54:36.896092 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 08:54:36.896100 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Feb 9 08:54:36.896109 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 9 08:54:36.896117 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 9 08:54:36.896128 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 08:54:36.896137 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 08:54:36.896146 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 08:54:36.896154 kernel: pnp: PnP ACPI init Feb 9 08:54:36.896163 kernel: pnp: PnP ACPI: found 4 devices Feb 9 08:54:36.896172 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 08:54:36.896180 kernel: NET: Registered PF_INET protocol family Feb 9 08:54:36.896189 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 08:54:36.896198 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 08:54:36.896208 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 08:54:36.896217 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 08:54:36.896225 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 08:54:36.896234 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 08:54:36.896242 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 08:54:36.896251 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 08:54:36.896259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 08:54:36.896268 kernel: NET: Registered PF_XDP protocol family Feb 9 08:54:36.896366 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 08:54:36.896471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 08:54:36.896563 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 08:54:36.896654 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 08:54:36.896769 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 9 08:54:36.896873 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 08:54:36.896975 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 08:54:36.897075 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 08:54:36.897090 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 08:54:36.897192 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 33535 usecs Feb 9 08:54:36.897203 kernel: PCI: CLS 0 bytes, default 64 Feb 9 08:54:36.897211 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 08:54:36.897220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Feb 9 08:54:36.897229 kernel: Initialise system trusted keyrings Feb 9 08:54:36.897237 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 08:54:36.897246 kernel: Key type asymmetric registered Feb 9 08:54:36.897254 kernel: Asymmetric key parser 'x509' registered Feb 9 08:54:36.897265 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 249) Feb 9 08:54:36.897273 kernel: io scheduler mq-deadline registered Feb 9 08:54:36.897283 kernel: io scheduler kyber registered Feb 9 08:54:36.897291 kernel: io scheduler bfq registered Feb 9 08:54:36.897300 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 08:54:36.897309 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 9 08:54:36.897318 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 08:54:36.897326 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 08:54:36.897335 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 08:54:36.897344 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 08:54:36.897355 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 08:54:36.897364 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 08:54:36.897372 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 08:54:36.897494 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 9 08:54:36.897507 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 08:54:36.897600 kernel: rtc_cmos 00:03: registered as rtc0 Feb 9 08:54:36.897694 kernel: rtc_cmos 00:03: setting system clock to 2024-02-09T08:54:36 UTC (1707468876) Feb 9 08:54:36.897838 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 9 08:54:36.897849 kernel: intel_pstate: CPU model not supported Feb 9 08:54:36.897857 kernel: NET: Registered PF_INET6 protocol family Feb 9 08:54:36.897866 kernel: Segment Routing with IPv6 Feb 9 08:54:36.897874 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 08:54:36.897883 kernel: NET: Registered PF_PACKET protocol family Feb 9 08:54:36.897891 kernel: Key type dns_resolver registered Feb 9 08:54:36.897900 kernel: IPI shorthand broadcast: enabled Feb 9 08:54:36.897909 kernel: sched_clock: Marking stable (694610575, 112219304)->(922222537, -115392658) Feb 9 08:54:36.897920 kernel: registered taskstats version 1 Feb 9 08:54:36.897928 kernel: Loading compiled-in X.509 certificates Feb 9 08:54:36.897937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 08:54:36.897945 kernel: Key type .fscrypt registered Feb 9 08:54:36.897953 kernel: Key type fscrypt-provisioning registered Feb 9 08:54:36.897962 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 08:54:36.897970 kernel: ima: Allocated hash algorithm: sha1 Feb 9 08:54:36.897979 kernel: ima: No architecture policies found Feb 9 08:54:36.897988 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 08:54:36.897998 kernel: Write protecting the kernel read-only data: 28672k Feb 9 08:54:36.898007 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 08:54:36.898015 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 08:54:36.898023 kernel: Run /init as init process Feb 9 08:54:36.898032 kernel: with arguments: Feb 9 08:54:36.898041 kernel: /init Feb 9 08:54:36.898068 kernel: with environment: Feb 9 08:54:36.898079 kernel: HOME=/ Feb 9 08:54:36.898088 kernel: TERM=linux Feb 9 08:54:36.898099 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 08:54:36.898112 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 08:54:36.898127 systemd[1]: Detected virtualization kvm. Feb 9 08:54:36.898137 systemd[1]: Detected architecture x86-64. Feb 9 08:54:36.898147 systemd[1]: Running in initrd. Feb 9 08:54:36.898156 systemd[1]: No hostname configured, using default hostname. Feb 9 08:54:36.898165 systemd[1]: Hostname set to . Feb 9 08:54:36.898177 systemd[1]: Initializing machine ID from VM UUID. Feb 9 08:54:36.898187 systemd[1]: Queued start job for default target initrd.target. Feb 9 08:54:36.898196 systemd[1]: Started systemd-ask-password-console.path. Feb 9 08:54:36.898205 systemd[1]: Reached target cryptsetup.target. Feb 9 08:54:36.898214 systemd[1]: Reached target paths.target. Feb 9 08:54:36.898223 systemd[1]: Reached target slices.target. Feb 9 08:54:36.898233 systemd[1]: Reached target swap.target. Feb 9 08:54:36.898242 systemd[1]: Reached target timers.target. Feb 9 08:54:36.898255 systemd[1]: Listening on iscsid.socket. Feb 9 08:54:36.898265 systemd[1]: Listening on iscsiuio.socket. Feb 9 08:54:36.898274 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 08:54:36.898283 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 08:54:36.898293 systemd[1]: Listening on systemd-journald.socket. Feb 9 08:54:36.898302 systemd[1]: Listening on systemd-networkd.socket. Feb 9 08:54:36.898311 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 08:54:36.898321 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 08:54:36.898333 systemd[1]: Reached target sockets.target. Feb 9 08:54:36.898342 systemd[1]: Starting kmod-static-nodes.service... Feb 9 08:54:36.898352 systemd[1]: Finished network-cleanup.service. Feb 9 08:54:36.898363 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 08:54:36.898373 systemd[1]: Starting systemd-journald.service... Feb 9 08:54:36.898382 systemd[1]: Starting systemd-modules-load.service... Feb 9 08:54:36.898394 systemd[1]: Starting systemd-resolved.service... Feb 9 08:54:36.898403 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 08:54:36.898413 systemd[1]: Finished kmod-static-nodes.service. Feb 9 08:54:36.898422 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 08:54:36.898439 systemd-journald[183]: Journal started Feb 9 08:54:36.898499 systemd-journald[183]: Runtime Journal (/run/log/journal/70ff895dffad427aadf8ca85689e7060) is 4.9M, max 39.5M, 34.5M free. 
Feb 9 08:54:36.886777 systemd-modules-load[184]: Inserted module 'overlay' Feb 9 08:54:36.931593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 08:54:36.931621 systemd[1]: Started systemd-journald.service. Feb 9 08:54:36.931638 kernel: Bridge firewalling registered Feb 9 08:54:36.931651 kernel: audit: type=1130 audit(1707468876.925:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.900641 systemd-resolved[185]: Positive Trust Anchors: Feb 9 08:54:36.935614 kernel: audit: type=1130 audit(1707468876.931:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.900650 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 08:54:36.940276 kernel: audit: type=1130 audit(1707468876.935:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.900681 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 08:54:36.903465 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 9 08:54:36.932058 systemd[1]: Started systemd-resolved.service. Feb 9 08:54:36.950142 kernel: audit: type=1130 audit(1707468876.945:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.932117 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 9 08:54:36.936095 systemd[1]: Reached target nss-lookup.target. Feb 9 08:54:36.941401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 08:54:36.945404 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 08:54:36.952647 systemd[1]: Starting dracut-cmdline-ask.service... 
Feb 9 08:54:36.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.961879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 08:54:36.966153 kernel: audit: type=1130 audit(1707468876.961:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.967728 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 08:54:36.973505 kernel: audit: type=1130 audit(1707468876.967:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:36.973532 kernel: SCSI subsystem initialized Feb 9 08:54:36.969083 systemd[1]: Starting dracut-cmdline.service... Feb 9 08:54:36.983627 dracut-cmdline[201]: dracut-dracut-053 Feb 9 08:54:36.987457 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 08:54:37.000620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 08:54:37.000649 kernel: device-mapper: uevent: version 1.0.3 Feb 9 08:54:37.000661 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 08:54:37.000048 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 9 08:54:37.000810 systemd[1]: Finished systemd-modules-load.service. Feb 9 08:54:37.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.002528 systemd[1]: Starting systemd-sysctl.service... Feb 9 08:54:37.010278 kernel: audit: type=1130 audit(1707468877.001:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.012874 systemd[1]: Finished systemd-sysctl.service. Feb 9 08:54:37.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.018304 kernel: audit: type=1130 audit(1707468877.012:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.074746 kernel: Loading iSCSI transport class v2.0-870. 
Feb 9 08:54:37.089953 kernel: iscsi: registered transport (tcp) Feb 9 08:54:37.116950 kernel: iscsi: registered transport (qla4xxx) Feb 9 08:54:37.117022 kernel: QLogic iSCSI HBA Driver Feb 9 08:54:37.160363 systemd[1]: Finished dracut-cmdline.service. Feb 9 08:54:37.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.164745 kernel: audit: type=1130 audit(1707468877.160:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.162178 systemd[1]: Starting dracut-pre-udev.service... Feb 9 08:54:37.220797 kernel: raid6: avx2x4 gen() 31163 MB/s Feb 9 08:54:37.237796 kernel: raid6: avx2x4 xor() 9299 MB/s Feb 9 08:54:37.254774 kernel: raid6: avx2x2 gen() 29396 MB/s Feb 9 08:54:37.271966 kernel: raid6: avx2x2 xor() 17874 MB/s Feb 9 08:54:37.288791 kernel: raid6: avx2x1 gen() 24919 MB/s Feb 9 08:54:37.305777 kernel: raid6: avx2x1 xor() 14122 MB/s Feb 9 08:54:37.322786 kernel: raid6: sse2x4 gen() 12231 MB/s Feb 9 08:54:37.339799 kernel: raid6: sse2x4 xor() 5934 MB/s Feb 9 08:54:37.356797 kernel: raid6: sse2x2 gen() 10841 MB/s Feb 9 08:54:37.373807 kernel: raid6: sse2x2 xor() 6429 MB/s Feb 9 08:54:37.390790 kernel: raid6: sse2x1 gen() 8818 MB/s Feb 9 08:54:37.408384 kernel: raid6: sse2x1 xor() 5047 MB/s Feb 9 08:54:37.408457 kernel: raid6: using algorithm avx2x4 gen() 31163 MB/s Feb 9 08:54:37.408470 kernel: raid6: .... xor() 9299 MB/s, rmw enabled Feb 9 08:54:37.409155 kernel: raid6: using avx2x2 recovery algorithm Feb 9 08:54:37.425760 kernel: xor: automatically using best checksumming function avx Feb 9 08:54:37.548769 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 08:54:37.560391 systemd[1]: Finished dracut-pre-udev.service. Feb 9 08:54:37.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.560000 audit: BPF prog-id=7 op=LOAD Feb 9 08:54:37.560000 audit: BPF prog-id=8 op=LOAD Feb 9 08:54:37.561921 systemd[1]: Starting systemd-udevd.service... Feb 9 08:54:37.576534 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 9 08:54:37.581612 systemd[1]: Started systemd-udevd.service. Feb 9 08:54:37.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.586124 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 08:54:37.601963 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Feb 9 08:54:37.639627 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 08:54:37.640999 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 08:54:37.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:37.694184 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 08:54:37.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:54:37.763768 kernel: libata version 3.00 loaded. Feb 9 08:54:37.767775 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 08:54:37.769987 kernel: scsi host0: ata_piix Feb 9 08:54:37.770226 kernel: scsi host2: ata_piix Feb 9 08:54:37.770377 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 9 08:54:37.771149 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 9 08:54:37.775420 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 9 08:54:37.778789 kernel: scsi host1: Virtio SCSI HBA Feb 9 08:54:37.794746 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 08:54:37.815354 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 08:54:37.815428 kernel: GPT:9289727 != 125829119 Feb 9 08:54:37.815442 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 08:54:37.815454 kernel: GPT:9289727 != 125829119 Feb 9 08:54:37.815466 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 08:54:37.815477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 08:54:37.826329 kernel: virtio_blk virtio5: [vdb] 1000 512-byte logical blocks (512 kB/500 KiB) Feb 9 08:54:37.835742 kernel: ACPI: bus type USB registered Feb 9 08:54:37.835797 kernel: usbcore: registered new interface driver usbfs Feb 9 08:54:37.835810 kernel: usbcore: registered new interface driver hub Feb 9 08:54:37.835821 kernel: usbcore: registered new device driver usb Feb 9 08:54:37.968187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 08:54:37.969773 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431) Feb 9 08:54:37.971248 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 08:54:37.971778 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 08:54:37.978213 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 08:54:37.989702 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 08:54:37.987790 systemd[1]: Starting disk-uuid.service... Feb 9 08:54:37.993672 disk-uuid[462]: Primary Header is updated. Feb 9 08:54:37.993672 disk-uuid[462]: Secondary Entries is updated. Feb 9 08:54:37.993672 disk-uuid[462]: Secondary Header is updated. Feb 9 08:54:38.003750 kernel: AES CTR mode by8 optimization enabled Feb 9 08:54:38.006744 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Feb 9 08:54:38.008743 kernel: ehci-pci: EHCI PCI platform driver Feb 9 08:54:38.016395 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 08:54:38.023753 kernel: uhci_hcd: USB Universal Host Controller Interface driver Feb 9 08:54:38.051748 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 9 08:54:38.051970 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 9 08:54:38.053945 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 9 08:54:38.056745 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Feb 9 08:54:38.056998 kernel: hub 1-0:1.0: USB hub found Feb 9 08:54:38.059704 kernel: hub 1-0:1.0: 2 ports detected Feb 9 08:54:39.007561 disk-uuid[463]: The operation has completed successfully. Feb 9 08:54:39.008299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 08:54:39.039479 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 08:54:39.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 08:54:39.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.039585 systemd[1]: Finished disk-uuid.service. Feb 9 08:54:39.040992 systemd[1]: Starting verity-setup.service... Feb 9 08:54:39.060000 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 08:54:39.113471 systemd[1]: Found device dev-mapper-usr.device. Feb 9 08:54:39.115448 systemd[1]: Mounting sysusr-usr.mount... Feb 9 08:54:39.116802 systemd[1]: Finished verity-setup.service. Feb 9 08:54:39.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.216768 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 08:54:39.216225 systemd[1]: Mounted sysusr-usr.mount. Feb 9 08:54:39.216801 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 08:54:39.217835 systemd[1]: Starting ignition-setup.service... Feb 9 08:54:39.220092 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 08:54:39.231483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 08:54:39.231551 kernel: BTRFS info (device vda6): using free space tree Feb 9 08:54:39.231564 kernel: BTRFS info (device vda6): has skinny extents Feb 9 08:54:39.247942 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 08:54:39.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.258110 systemd[1]: Finished ignition-setup.service. Feb 9 08:54:39.261303 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 08:54:39.404032 ignition[603]: Ignition 2.14.0 Feb 9 08:54:39.404044 ignition[603]: Stage: fetch-offline Feb 9 08:54:39.404128 ignition[603]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:39.404155 ignition[603]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:39.410042 ignition[603]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:39.410186 ignition[603]: parsed url from cmdline: "" Feb 9 08:54:39.410190 ignition[603]: no config URL provided Feb 9 08:54:39.410196 ignition[603]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 08:54:39.410207 ignition[603]: no config at "/usr/lib/ignition/user.ign" Feb 9 08:54:39.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.412532 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 08:54:39.413000 audit: BPF prog-id=9 op=LOAD Feb 9 08:54:39.410213 ignition[603]: failed to fetch config: resource requires networking Feb 9 08:54:39.414592 systemd[1]: Starting systemd-networkd.service... Feb 9 08:54:39.410348 ignition[603]: Ignition finished successfully Feb 9 08:54:39.415612 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 9 08:54:39.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.442585 systemd-networkd[689]: lo: Link UP Feb 9 08:54:39.442603 systemd-networkd[689]: lo: Gained carrier Feb 9 08:54:39.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.443365 systemd-networkd[689]: Enumeration completed Feb 9 08:54:39.443493 systemd[1]: Started systemd-networkd.service. Feb 9 08:54:39.443952 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 08:54:39.444637 systemd[1]: Reached target network.target. Feb 9 08:54:39.446027 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 9 08:54:39.447463 systemd[1]: Starting ignition-fetch.service... Feb 9 08:54:39.449211 systemd[1]: Starting iscsiuio.service... Feb 9 08:54:39.459214 systemd-networkd[689]: eth1: Link UP Feb 9 08:54:39.459220 systemd-networkd[689]: eth1: Gained carrier Feb 9 08:54:39.469509 ignition[691]: Ignition 2.14.0 Feb 9 08:54:39.471221 systemd-networkd[689]: eth0: Link UP Feb 9 08:54:39.469520 ignition[691]: Stage: fetch Feb 9 08:54:39.471228 systemd-networkd[689]: eth0: Gained carrier Feb 9 08:54:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.469701 ignition[691]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:39.480398 systemd[1]: Started iscsiuio.service. Feb 9 08:54:39.469748 ignition[691]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:39.482746 systemd[1]: Starting iscsid.service... Feb 9 08:54:39.489974 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 08:54:39.489974 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 08:54:39.489974 iscsid[699]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 08:54:39.489974 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 08:54:39.489974 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 08:54:39.489974 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 08:54:39.489974 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 08:54:39.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:54:39.474935 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:39.487843 systemd-networkd[689]: eth0: DHCPv4 address 143.198.159.117/20, gateway 143.198.144.1 acquired from 169.254.169.253 Feb 9 08:54:39.475086 ignition[691]: parsed url from cmdline: "" Feb 9 08:54:39.491204 systemd[1]: Started iscsid.service. Feb 9 08:54:39.475091 ignition[691]: no config URL provided Feb 9 08:54:39.493936 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.13/20 acquired from 169.254.169.253 Feb 9 08:54:39.475097 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 08:54:39.495386 systemd[1]: Starting dracut-initqueue.service... Feb 9 08:54:39.475109 ignition[691]: no config at "/usr/lib/ignition/user.ign" Feb 9 08:54:39.475145 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 9 08:54:39.489433 ignition[691]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 9 08:54:39.510709 systemd[1]: Finished dracut-initqueue.service. Feb 9 08:54:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.511412 systemd[1]: Reached target remote-fs-pre.target. Feb 9 08:54:39.512140 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 08:54:39.512967 systemd[1]: Reached target remote-fs.target. Feb 9 08:54:39.514769 systemd[1]: Starting dracut-pre-mount.service... Feb 9 08:54:39.525244 systemd[1]: Finished dracut-pre-mount.service. Feb 9 08:54:39.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.690004 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Feb 9 08:54:39.723053 ignition[691]: GET result: OK Feb 9 08:54:39.723202 ignition[691]: parsing config with SHA512: d8519cd27e2298d1c45b83048d7a9c56687e6afa9a59e89de50c8f01f793eb29f8d6bfdcacd71b16e05f8cd8e69722d04ba4096a4bb4619a7e08024491b6a4ae Feb 9 08:54:39.763409 unknown[691]: fetched base config from "system" Feb 9 08:54:39.763429 unknown[691]: fetched base config from "system" Feb 9 08:54:39.764024 ignition[691]: fetch: fetch complete Feb 9 08:54:39.763440 unknown[691]: fetched user config from "digitalocean" Feb 9 08:54:39.764031 ignition[691]: fetch: fetch passed Feb 9 08:54:39.764076 ignition[691]: Ignition finished successfully Feb 9 08:54:39.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.768015 systemd[1]: Finished ignition-fetch.service. Feb 9 08:54:39.769455 systemd[1]: Starting ignition-kargs.service... 
Feb 9 08:54:39.780899 ignition[714]: Ignition 2.14.0 Feb 9 08:54:39.780919 ignition[714]: Stage: kargs Feb 9 08:54:39.781056 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:39.781075 ignition[714]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:39.782525 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:39.784456 ignition[714]: kargs: kargs passed Feb 9 08:54:39.784521 ignition[714]: Ignition finished successfully Feb 9 08:54:39.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.786745 systemd[1]: Finished ignition-kargs.service. Feb 9 08:54:39.788159 systemd[1]: Starting ignition-disks.service... Feb 9 08:54:39.796404 ignition[720]: Ignition 2.14.0 Feb 9 08:54:39.797084 ignition[720]: Stage: disks Feb 9 08:54:39.797632 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:39.798230 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:39.800154 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:39.803464 ignition[720]: disks: disks passed Feb 9 08:54:39.804192 ignition[720]: Ignition finished successfully Feb 9 08:54:39.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.805514 systemd[1]: Finished ignition-disks.service. Feb 9 08:54:39.806112 systemd[1]: Reached target initrd-root-device.target. Feb 9 08:54:39.806541 systemd[1]: Reached target local-fs-pre.target. Feb 9 08:54:39.806948 systemd[1]: Reached target local-fs.target. Feb 9 08:54:39.807361 systemd[1]: Reached target sysinit.target. Feb 9 08:54:39.807707 systemd[1]: Reached target basic.target. Feb 9 08:54:39.811549 systemd[1]: Starting systemd-fsck-root.service... Feb 9 08:54:39.828408 systemd-fsck[728]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 08:54:39.832446 systemd[1]: Finished systemd-fsck-root.service. Feb 9 08:54:39.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:39.833856 systemd[1]: Mounting sysroot.mount... Feb 9 08:54:39.843755 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 08:54:39.844237 systemd[1]: Mounted sysroot.mount. Feb 9 08:54:39.845079 systemd[1]: Reached target initrd-root-fs.target. Feb 9 08:54:39.847779 systemd[1]: Mounting sysroot-usr.mount... Feb 9 08:54:39.849492 systemd[1]: Starting flatcar-digitalocean-network.service... Feb 9 08:54:39.851871 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 08:54:39.852584 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 08:54:39.852641 systemd[1]: Reached target ignition-diskful.target. Feb 9 08:54:39.859650 systemd[1]: Mounted sysroot-usr.mount. 
Feb 9 08:54:39.863413 systemd[1]: Starting initrd-setup-root.service... Feb 9 08:54:39.880955 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 08:54:39.892664 initrd-setup-root[748]: cut: /sysroot/etc/group: No such file or directory Feb 9 08:54:39.906429 initrd-setup-root[758]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 08:54:39.922483 initrd-setup-root[768]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 08:54:39.987613 coreos-metadata[735]: Feb 09 08:54:39.987 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 9 08:54:40.002065 coreos-metadata[735]: Feb 09 08:54:40.002 INFO Fetch successful Feb 9 08:54:40.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.005441 systemd[1]: Finished initrd-setup-root.service. Feb 9 08:54:40.007539 systemd[1]: Starting ignition-mount.service... Feb 9 08:54:40.009464 systemd[1]: Starting sysroot-boot.service... Feb 9 08:54:40.017985 coreos-metadata[735]: Feb 09 08:54:40.017 INFO wrote hostname ci-3510.3.2-6-9c47918d0b to /sysroot/etc/hostname Feb 9 08:54:40.022617 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 08:54:40.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.032534 bash[786]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 08:54:40.038334 coreos-metadata[734]: Feb 09 08:54:40.038 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 9 08:54:40.046823 ignition[787]: INFO : Ignition 2.14.0 Feb 9 08:54:40.047649 ignition[787]: INFO : Stage: mount Feb 9 08:54:40.048807 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:40.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.049920 systemd[1]: Finished sysroot-boot.service. Feb 9 08:54:40.051745 coreos-metadata[734]: Feb 09 08:54:40.051 INFO Fetch successful Feb 9 08:54:40.055857 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:40.059372 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:40.061611 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 9 08:54:40.061739 systemd[1]: Finished flatcar-digitalocean-network.service. Feb 9 08:54:40.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.064244 ignition[787]: INFO : mount: mount passed Feb 9 08:54:40.064244 ignition[787]: INFO : Ignition finished successfully Feb 9 08:54:40.065973 systemd[1]: Finished ignition-mount.service. 
Feb 9 08:54:40.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:40.133232 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 08:54:40.140767 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (795) Feb 9 08:54:40.151748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 08:54:40.151819 kernel: BTRFS info (device vda6): using free space tree Feb 9 08:54:40.151832 kernel: BTRFS info (device vda6): has skinny extents Feb 9 08:54:40.157788 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 08:54:40.159642 systemd[1]: Starting ignition-files.service... Feb 9 08:54:40.179341 ignition[815]: INFO : Ignition 2.14.0 Feb 9 08:54:40.179341 ignition[815]: INFO : Stage: files Feb 9 08:54:40.180556 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:40.180556 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:40.182282 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:40.187739 ignition[815]: DEBUG : files: compiled without relabeling support, skipping Feb 9 08:54:40.189435 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 08:54:40.189435 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 08:54:40.193325 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 08:54:40.194228 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 08:54:40.195005 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 08:54:40.194450 unknown[815]: wrote ssh authorized keys file for user: core Feb 9 08:54:40.196736 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 08:54:40.196736 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 08:54:40.221523 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 08:54:40.274478 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 08:54:40.274478 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 08:54:40.276345 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 08:54:40.753902 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 08:54:40.777957 systemd-networkd[689]: eth1: Gained IPv6LL Feb 9 08:54:40.841929 systemd-networkd[689]: eth0: Gained IPv6LL Feb 9 08:54:40.962934 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 
4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 08:54:40.964236 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 08:54:40.964236 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 08:54:40.964236 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 08:54:41.368216 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 08:54:41.485512 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 08:54:41.486843 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 08:54:41.486843 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 08:54:41.486843 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 08:54:41.486843 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 08:54:41.486843 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 08:54:41.547104 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 08:54:41.793259 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 08:54:41.793259 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 08:54:41.795445 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 08:54:41.795445 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 08:54:41.839232 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 08:54:42.088475 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 08:54:42.088475 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 08:54:42.090869 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 08:54:42.090869 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 08:54:42.135444 ignition[815]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): GET result: OK Feb 9 08:54:42.874584 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 08:54:42.876266 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 08:54:42.886208 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(11): [started] processing unit "prepare-helm.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(11): op(12): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(11): op(12): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(11): [finished] processing unit "prepare-helm.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(13): [started] processing unit "containerd.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(13): [finished] processing unit "containerd.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(17): [started] processing unit "prepare-critools.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 08:54:42.886208 ignition[815]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 08:54:42.932017 kernel: kauditd_printk_skb: 28 callbacks suppressed Feb 9 08:54:42.932055 kernel: audit: type=1130 audit(1707468882.891:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.932071 kernel: audit: type=1130 audit(1707468882.907:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.932085 kernel: audit: type=1130 audit(1707468882.919:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.932098 kernel: audit: type=1131 audit(1707468882.919:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.890487 systemd[1]: Finished ignition-files.service. 
Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(17): [finished] processing unit "prepare-critools.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 08:54:42.934295 ignition[815]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 08:54:42.934295 ignition[815]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 08:54:42.934295 ignition[815]: INFO : files: files passed Feb 9 08:54:42.934295 ignition[815]: INFO : Ignition finished successfully Feb 9 08:54:42.893301 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 08:54:42.902846 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 08:54:42.946543 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 08:54:42.953825 kernel: audit: type=1130 audit(1707468882.946:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.953865 kernel: audit: type=1131 audit(1707468882.946:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.903768 systemd[1]: Starting ignition-quench.service... Feb 9 08:54:42.906538 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 08:54:42.908803 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 08:54:42.908906 systemd[1]: Finished ignition-quench.service. Feb 9 08:54:42.920319 systemd[1]: Reached target ignition-complete.target. Feb 9 08:54:42.928115 systemd[1]: Starting initrd-parse-etc.service... Feb 9 08:54:42.945402 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 08:54:42.945521 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 08:54:42.947254 systemd[1]: Reached target initrd-fs.target. Feb 9 08:54:42.954230 systemd[1]: Reached target initrd.target. Feb 9 08:54:42.955061 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 08:54:42.955986 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 08:54:42.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.971896 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 08:54:42.976286 kernel: audit: type=1130 audit(1707468882.971:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.976506 systemd[1]: Starting initrd-cleanup.service... Feb 9 08:54:42.986815 systemd[1]: Stopped target nss-lookup.target. Feb 9 08:54:42.988064 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 08:54:42.989136 systemd[1]: Stopped target timers.target. Feb 9 08:54:42.990089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 08:54:42.990816 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 08:54:42.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.992131 systemd[1]: Stopped target initrd.target. Feb 9 08:54:42.995782 kernel: audit: type=1131 audit(1707468882.991:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:42.996349 systemd[1]: Stopped target basic.target. Feb 9 08:54:42.996927 systemd[1]: Stopped target ignition-complete.target. Feb 9 08:54:42.998032 systemd[1]: Stopped target ignition-diskful.target. Feb 9 08:54:42.998919 systemd[1]: Stopped target initrd-root-device.target. Feb 9 08:54:42.999983 systemd[1]: Stopped target remote-fs.target. Feb 9 08:54:43.000829 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 08:54:43.001756 systemd[1]: Stopped target sysinit.target. Feb 9 08:54:43.002584 systemd[1]: Stopped target local-fs.target. Feb 9 08:54:43.003574 systemd[1]: Stopped target local-fs-pre.target. Feb 9 08:54:43.004385 systemd[1]: Stopped target swap.target. Feb 9 08:54:43.005169 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 08:54:43.009684 kernel: audit: type=1131 audit(1707468883.005:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.005294 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 08:54:43.006191 systemd[1]: Stopped target cryptsetup.target. Feb 9 08:54:43.014751 kernel: audit: type=1131 audit(1707468883.010:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:54:43.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.010225 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 08:54:43.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.010383 systemd[1]: Stopped dracut-initqueue.service. Feb 9 08:54:43.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.011219 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 08:54:43.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.011395 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 08:54:43.015503 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 08:54:43.015663 systemd[1]: Stopped ignition-files.service. Feb 9 08:54:43.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.016242 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 08:54:43.033955 iscsid[699]: iscsid shutting down. Feb 9 08:54:43.016359 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 08:54:43.018526 systemd[1]: Stopping ignition-mount.service... Feb 9 08:54:43.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.019545 systemd[1]: Stopping iscsid.service... 
Feb 9 08:54:43.041443 ignition[854]: INFO : Ignition 2.14.0 Feb 9 08:54:43.041443 ignition[854]: INFO : Stage: umount Feb 9 08:54:43.041443 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 08:54:43.041443 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 9 08:54:43.025258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 08:54:43.025434 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 08:54:43.026744 systemd[1]: Stopping sysroot-boot.service... Feb 9 08:54:43.027216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 08:54:43.027374 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 08:54:43.027954 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 08:54:43.028049 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 08:54:43.030060 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 08:54:43.030188 systemd[1]: Stopped iscsid.service. Feb 9 08:54:43.032621 systemd[1]: Stopping iscsiuio.service... Feb 9 08:54:43.033355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 08:54:43.033473 systemd[1]: Finished initrd-cleanup.service. Feb 9 08:54:43.035971 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 08:54:43.036770 systemd[1]: Stopped iscsiuio.service. Feb 9 08:54:43.055001 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 9 08:54:43.066626 ignition[854]: INFO : umount: umount passed Feb 9 08:54:43.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.072253 ignition[854]: INFO : Ignition finished successfully Feb 9 08:54:43.067594 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 08:54:43.067709 systemd[1]: Stopped ignition-mount.service. Feb 9 08:54:43.068282 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 08:54:43.068341 systemd[1]: Stopped ignition-disks.service. Feb 9 08:54:43.068792 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 08:54:43.068832 systemd[1]: Stopped ignition-kargs.service. Feb 9 08:54:43.069239 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 08:54:43.069273 systemd[1]: Stopped ignition-fetch.service. Feb 9 08:54:43.069673 systemd[1]: Stopped target network.target. Feb 9 08:54:43.070076 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 9 08:54:43.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.070116 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 08:54:43.071563 systemd[1]: Stopped target paths.target. Feb 9 08:54:43.071962 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 08:54:43.075824 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 08:54:43.076469 systemd[1]: Stopped target slices.target. Feb 9 08:54:43.076849 systemd[1]: Stopped target sockets.target. Feb 9 08:54:43.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.077245 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 08:54:43.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.077288 systemd[1]: Closed iscsid.socket. Feb 9 08:54:43.077660 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 08:54:43.077690 systemd[1]: Closed iscsiuio.socket. Feb 9 08:54:43.078128 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 08:54:43.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.078186 systemd[1]: Stopped ignition-setup.service. Feb 9 08:54:43.079486 systemd[1]: Stopping systemd-networkd.service... Feb 9 08:54:43.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.080457 systemd[1]: Stopping systemd-resolved.service... Feb 9 08:54:43.091000 audit: BPF prog-id=6 op=UNLOAD Feb 9 08:54:43.082794 systemd-networkd[689]: eth1: DHCPv6 lease lost Feb 9 08:54:43.083158 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 08:54:43.083819 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 08:54:43.083921 systemd[1]: Stopped sysroot-boot.service. Feb 9 08:54:43.084953 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 08:54:43.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.085001 systemd[1]: Stopped initrd-setup-root.service. Feb 9 08:54:43.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.087817 systemd-networkd[689]: eth0: DHCPv6 lease lost Feb 9 08:54:43.098000 audit: BPF prog-id=9 op=UNLOAD Feb 9 08:54:43.088978 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 08:54:43.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.089089 systemd[1]: Stopped systemd-resolved.service. 
Feb 9 08:54:43.090541 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 08:54:43.090644 systemd[1]: Stopped systemd-networkd.service. Feb 9 08:54:43.092396 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 08:54:43.092440 systemd[1]: Closed systemd-networkd.socket. Feb 9 08:54:43.093907 systemd[1]: Stopping network-cleanup.service... Feb 9 08:54:43.094394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 08:54:43.094455 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 08:54:43.097538 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 08:54:43.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.097600 systemd[1]: Stopped systemd-sysctl.service. Feb 9 08:54:43.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.098793 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 08:54:43.098847 systemd[1]: Stopped systemd-modules-load.service. Feb 9 08:54:43.102076 systemd[1]: Stopping systemd-udevd.service... Feb 9 08:54:43.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.106182 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 08:54:43.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.109689 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 08:54:43.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.109818 systemd[1]: Stopped network-cleanup.service. Feb 9 08:54:43.111340 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 08:54:43.111488 systemd[1]: Stopped systemd-udevd.service. Feb 9 08:54:43.112300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 08:54:43.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.112342 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 08:54:43.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:43.112937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 08:54:43.112974 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 08:54:43.113849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 08:54:43.113887 systemd[1]: Stopped dracut-pre-udev.service. 
Feb 9 08:54:43.114682 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 08:54:43.114735 systemd[1]: Stopped dracut-cmdline.service. Feb 9 08:54:43.115492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 08:54:43.115531 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 08:54:43.117205 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 08:54:43.117863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 08:54:43.117938 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 08:54:43.125844 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 08:54:43.125968 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 08:54:43.126615 systemd[1]: Reached target initrd-switch-root.target. Feb 9 08:54:43.128193 systemd[1]: Starting initrd-switch-root.service... Feb 9 08:54:43.137701 systemd[1]: Switching root. Feb 9 08:54:43.137000 audit: BPF prog-id=8 op=UNLOAD Feb 9 08:54:43.138000 audit: BPF prog-id=7 op=UNLOAD Feb 9 08:54:43.139000 audit: BPF prog-id=5 op=UNLOAD Feb 9 08:54:43.139000 audit: BPF prog-id=4 op=UNLOAD Feb 9 08:54:43.139000 audit: BPF prog-id=3 op=UNLOAD Feb 9 08:54:43.155849 systemd-journald[183]: Journal stopped Feb 9 08:54:46.944778 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 9 08:54:46.944892 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 08:54:46.944920 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 08:54:46.944947 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 08:54:46.944967 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 08:54:46.944985 kernel: SELinux: policy capability open_perms=1 Feb 9 08:54:46.945010 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 08:54:46.945029 kernel: SELinux: policy capability always_check_network=0 Feb 9 08:54:46.945046 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 08:54:46.945063 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 08:54:46.945079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 08:54:46.945098 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 08:54:46.945118 systemd[1]: Successfully loaded SELinux policy in 49.330ms. Feb 9 08:54:46.945162 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.977ms. Feb 9 08:54:46.945187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 08:54:46.945212 systemd[1]: Detected virtualization kvm. Feb 9 08:54:46.945233 systemd[1]: Detected architecture x86-64. Feb 9 08:54:46.945250 systemd[1]: Detected first boot. Feb 9 08:54:46.945269 systemd[1]: Hostname set to . Feb 9 08:54:46.945288 systemd[1]: Initializing machine ID from VM UUID. Feb 9 08:54:46.945307 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 08:54:46.945330 systemd[1]: Populated /etc with preset unit settings. Feb 9 08:54:46.945350 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 08:54:46.945371 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:54:46.945393 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:54:46.945414 systemd[1]: Queued start job for default target multi-user.target. Feb 9 08:54:46.945431 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 08:54:46.945450 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 08:54:46.945469 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 08:54:46.945492 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 08:54:46.945511 systemd[1]: Created slice system-getty.slice. Feb 9 08:54:46.945529 systemd[1]: Created slice system-modprobe.slice. Feb 9 08:54:46.945547 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 08:54:46.945566 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 08:54:46.945583 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 08:54:46.945603 systemd[1]: Created slice user.slice. Feb 9 08:54:46.945623 systemd[1]: Started systemd-ask-password-console.path. Feb 9 08:54:46.945648 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 08:54:46.945672 systemd[1]: Set up automount boot.automount. Feb 9 08:54:46.945693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 08:54:46.945711 systemd[1]: Reached target integritysetup.target. Feb 9 08:54:46.963819 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 08:54:46.963853 systemd[1]: Reached target remote-fs.target. Feb 9 08:54:46.963877 systemd[1]: Reached target slices.target. Feb 9 08:54:46.963898 systemd[1]: Reached target swap.target. Feb 9 08:54:46.963925 systemd[1]: Reached target torcx.target. Feb 9 08:54:46.963946 systemd[1]: Reached target veritysetup.target. Feb 9 08:54:46.963965 systemd[1]: Listening on systemd-coredump.socket. Feb 9 08:54:46.963985 systemd[1]: Listening on systemd-initctl.socket. Feb 9 08:54:46.964003 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 08:54:46.964022 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 08:54:46.964042 systemd[1]: Listening on systemd-journald.socket. Feb 9 08:54:46.964063 systemd[1]: Listening on systemd-networkd.socket. Feb 9 08:54:46.964082 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 08:54:46.964099 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 08:54:46.964124 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 08:54:46.964146 systemd[1]: Mounting dev-hugepages.mount... Feb 9 08:54:46.964165 systemd[1]: Mounting dev-mqueue.mount... Feb 9 08:54:46.964182 systemd[1]: Mounting media.mount... Feb 9 08:54:46.964203 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:54:46.964224 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 08:54:46.964242 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 08:54:46.964262 systemd[1]: Mounting tmp.mount... Feb 9 08:54:46.964281 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 08:54:46.964306 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 08:54:46.964327 systemd[1]: Starting kmod-static-nodes.service... 
Feb 9 08:54:46.964345 systemd[1]: Starting modprobe@configfs.service... Feb 9 08:54:46.964364 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 08:54:46.964383 systemd[1]: Starting modprobe@drm.service... Feb 9 08:54:46.964401 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 08:54:46.964419 systemd[1]: Starting modprobe@fuse.service... Feb 9 08:54:46.964436 systemd[1]: Starting modprobe@loop.service... Feb 9 08:54:46.964456 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 08:54:46.964480 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 08:54:46.964510 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 08:54:46.964528 systemd[1]: Starting systemd-journald.service... Feb 9 08:54:46.964546 systemd[1]: Starting systemd-modules-load.service... Feb 9 08:54:46.964567 systemd[1]: Starting systemd-network-generator.service... Feb 9 08:54:46.964588 systemd[1]: Starting systemd-remount-fs.service... Feb 9 08:54:46.964610 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 08:54:46.964629 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:54:46.964649 systemd[1]: Mounted dev-hugepages.mount. Feb 9 08:54:46.964667 systemd[1]: Mounted dev-mqueue.mount. Feb 9 08:54:46.964685 systemd[1]: Mounted media.mount. Feb 9 08:54:46.964705 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 08:54:46.970848 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 08:54:46.970889 systemd[1]: Mounted tmp.mount. Feb 9 08:54:46.970918 systemd[1]: Finished kmod-static-nodes.service. Feb 9 08:54:46.970939 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 08:54:46.970978 systemd[1]: Finished modprobe@configfs.service. Feb 9 08:54:46.970996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 08:54:46.971015 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 08:54:46.971033 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 08:54:46.971051 systemd[1]: Finished modprobe@drm.service. Feb 9 08:54:46.971071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 08:54:46.971090 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 08:54:46.971113 systemd[1]: Finished systemd-modules-load.service. Feb 9 08:54:46.971133 kernel: loop: module loaded Feb 9 08:54:46.971153 systemd[1]: Finished systemd-network-generator.service. Feb 9 08:54:46.971174 systemd[1]: Finished systemd-remount-fs.service. Feb 9 08:54:46.971192 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 08:54:46.971214 systemd[1]: Finished modprobe@loop.service. Feb 9 08:54:46.971233 systemd[1]: Reached target network-pre.target. Feb 9 08:54:46.971252 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 08:54:46.971274 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 08:54:46.971293 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 08:54:46.971314 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 08:54:46.971333 systemd[1]: Starting systemd-random-seed.service... Feb 9 08:54:46.971354 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Feb 9 08:54:46.971371 systemd[1]: Starting systemd-sysctl.service... Feb 9 08:54:46.971393 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 08:54:46.971420 systemd-journald[988]: Journal started Feb 9 08:54:46.971507 systemd-journald[988]: Runtime Journal (/run/log/journal/70ff895dffad427aadf8ca85689e7060) is 4.9M, max 39.5M, 34.5M free. Feb 9 08:54:46.747000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 08:54:46.747000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 08:54:46.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.990619 systemd[1]: Started systemd-journald.service. Feb 9 08:54:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.992078 systemd-journald[988]: Time spent on flushing to /var/log/journal/70ff895dffad427aadf8ca85689e7060 is 34.911ms for 1101 entries. Feb 9 08:54:46.992078 systemd-journald[988]: System Journal (/var/log/journal/70ff895dffad427aadf8ca85689e7060) is 8.0M, max 195.6M, 187.6M free. Feb 9 08:54:47.063133 systemd-journald[988]: Received client request to flush runtime journal. 
Feb 9 08:54:47.063177 kernel: fuse: init (API version 7.34) Feb 9 08:54:46.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.935000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 08:54:46.935000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff8f9ce2f0 a2=4000 a3=7fff8f9ce38c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:46.935000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 08:54:46.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:46.978229 systemd[1]: Starting systemd-journal-flush.service... Feb 9 08:54:46.999986 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 08:54:47.000222 systemd[1]: Finished modprobe@fuse.service. Feb 9 08:54:47.001211 systemd[1]: Finished systemd-random-seed.service. Feb 9 08:54:47.001894 systemd[1]: Reached target first-boot-complete.target. Feb 9 08:54:47.005304 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 9 08:54:47.011055 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 08:54:47.032109 systemd[1]: Finished systemd-sysctl.service. Feb 9 08:54:47.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.074542 systemd[1]: Finished systemd-journal-flush.service. Feb 9 08:54:47.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.099826 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 08:54:47.101682 systemd[1]: Starting systemd-udev-settle.service... Feb 9 08:54:47.113896 udevadm[1045]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 08:54:47.123281 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 08:54:47.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.125290 systemd[1]: Starting systemd-sysusers.service... Feb 9 08:54:47.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.159995 systemd[1]: Finished systemd-sysusers.service. Feb 9 08:54:47.161974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 08:54:47.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.199700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 08:54:47.626712 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 08:54:47.628611 systemd[1]: Starting systemd-udevd.service... Feb 9 08:54:47.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.652280 systemd-udevd[1056]: Using default interface naming scheme 'v252'. Feb 9 08:54:47.684708 systemd[1]: Started systemd-udevd.service. Feb 9 08:54:47.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.687238 systemd[1]: Starting systemd-networkd.service... Feb 9 08:54:47.696539 systemd[1]: Starting systemd-userdbd.service... Feb 9 08:54:47.740016 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:54:47.740205 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 08:54:47.741527 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 08:54:47.744059 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 08:54:47.745656 systemd[1]: Starting modprobe@loop.service... 
Feb 9 08:54:47.746254 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 08:54:47.746336 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 08:54:47.746442 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 08:54:47.746918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 08:54:47.747144 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 08:54:47.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.748978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 08:54:47.749150 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 08:54:47.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.752219 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 08:54:47.752440 systemd[1]: Finished modprobe@loop.service. Feb 9 08:54:47.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.761286 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 08:54:47.761367 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 08:54:47.768202 systemd[1]: Started systemd-userdbd.service. Feb 9 08:54:47.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.824309 systemd[1]: Found device dev-ttyS0.device. Feb 9 08:54:47.859952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 08:54:47.879258 systemd-networkd[1059]: lo: Link UP Feb 9 08:54:47.879697 systemd-networkd[1059]: lo: Gained carrier Feb 9 08:54:47.880314 systemd-networkd[1059]: Enumeration completed Feb 9 08:54:47.880479 systemd[1]: Started systemd-networkd.service. Feb 9 08:54:47.880646 systemd-networkd[1059]: eth1: Configuring with /run/systemd/network/10-ae:cd:a8:de:b2:c6.network. 
Feb 9 08:54:47.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:47.882281 systemd-networkd[1059]: eth0: Configuring with /run/systemd/network/10-ce:c7:f4:e2:b4:a8.network. Feb 9 08:54:47.883121 systemd-networkd[1059]: eth1: Link UP Feb 9 08:54:47.883206 systemd-networkd[1059]: eth1: Gained carrier Feb 9 08:54:47.888172 systemd-networkd[1059]: eth0: Link UP Feb 9 08:54:47.888183 systemd-networkd[1059]: eth0: Gained carrier Feb 9 08:54:47.921744 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 08:54:47.926877 kernel: kauditd_printk_skb: 88 callbacks suppressed Feb 9 08:54:47.927023 kernel: audit: type=1400 audit(1707468887.912:128): avc: denied { confidentiality } for pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 08:54:47.912000 audit[1066]: AVC avc: denied { confidentiality } for pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 08:54:47.944422 kernel: audit: type=1300 audit(1707468887.912:128): arch=c000003e syscall=175 success=yes exit=0 a0=55ebaeb62530 a1=32194 a2=7fce0ee16bc5 a3=5 items=108 ppid=1056 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:47.944558 kernel: ACPI: button: Power Button [PWRF] Feb 9 08:54:47.944587 kernel: audit: type=1307 audit(1707468887.912:128): cwd="/" Feb 9 08:54:47.912000 audit[1066]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ebaeb62530 a1=32194 a2=7fce0ee16bc5 a3=5 items=108 ppid=1056 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:47.912000 audit: CWD cwd="/" Feb 9 08:54:47.955825 kernel: audit: type=1302 audit(1707468887.912:128): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.955938 kernel: audit: type=1302 audit(1707468887.912:128): item=1 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=1 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.965600 kernel: audit: type=1302 audit(1707468887.912:128): item=2 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.965796 kernel: audit: type=1302 audit(1707468887.912:128): item=3 name=(null) inode=14078 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=2 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=3 name=(null) inode=14078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.980843 kernel: audit: type=1302 audit(1707468887.912:128): item=4 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.980984 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 08:54:47.981013 kernel: audit: type=1302 audit(1707468887.912:128): item=5 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=4 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=5 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=6 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=7 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=8 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=9 name=(null) inode=14081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=10 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=11 name=(null) inode=14082 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=12 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=13 name=(null) inode=14083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=14 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 08:54:47.912000 audit: PATH item=15 name=(null) inode=14084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=16 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.992740 kernel: audit: type=1302 audit(1707468887.912:128): item=6 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=17 name=(null) inode=14085 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=18 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=19 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=20 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=21 name=(null) inode=14087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=22 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=23 name=(null) inode=14088 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=24 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=25 name=(null) inode=14089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=26 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=27 name=(null) inode=14090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=28 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=29 name=(null) inode=14091 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=30 
name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=31 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=32 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=33 name=(null) inode=14093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=34 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=35 name=(null) inode=14094 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=36 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=37 name=(null) inode=14095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=38 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=39 name=(null) inode=14096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=40 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=41 name=(null) inode=14097 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=42 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=43 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=44 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=45 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=46 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=47 name=(null) inode=14100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=48 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=49 name=(null) inode=14101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=50 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=51 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=52 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=53 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=55 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=56 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=57 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=58 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=59 name=(null) inode=14106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=60 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=61 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=62 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=63 name=(null) inode=14108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=64 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=65 name=(null) inode=14109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=66 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=67 name=(null) inode=14110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=68 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=69 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=70 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=71 name=(null) inode=14112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=72 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=73 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=74 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=75 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=76 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=77 name=(null) inode=14115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=78 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=79 name=(null) inode=14116 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=80 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=81 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=82 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=83 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=84 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=85 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=86 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=87 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=88 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=89 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=90 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=91 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=92 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=93 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=94 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=95 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=96 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=97 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=98 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=99 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=100 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=101 name=(null) inode=14127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=102 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=103 name=(null) inode=14128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=104 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=105 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=106 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PATH item=107 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:54:47.912000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 08:54:47.999744 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 08:54:48.022751 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 08:54:48.151746 kernel: EDAC MC: Ver: 3.0.0 Feb 9 08:54:48.172335 systemd[1]: Finished systemd-udev-settle.service. Feb 9 08:54:48.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.174664 systemd[1]: Starting lvm2-activation-early.service... Feb 9 08:54:48.197496 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 08:54:48.227525 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 08:54:48.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.228182 systemd[1]: Reached target cryptsetup.target. Feb 9 08:54:48.229937 systemd[1]: Starting lvm2-activation.service... Feb 9 08:54:48.235755 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 08:54:48.264245 systemd[1]: Finished lvm2-activation.service. Feb 9 08:54:48.264907 systemd[1]: Reached target local-fs-pre.target. Feb 9 08:54:48.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.267000 systemd[1]: Mounting media-configdrive.mount... Feb 9 08:54:48.267495 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 08:54:48.267556 systemd[1]: Reached target machines.target. Feb 9 08:54:48.269296 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 08:54:48.292120 kernel: ISO 9660 Extensions: RRIP_1991A Feb 9 08:54:48.289155 systemd[1]: Mounted media-configdrive.mount. Feb 9 08:54:48.289934 systemd[1]: Reached target local-fs.target. Feb 9 08:54:48.292795 systemd[1]: Starting ldconfig.service... Feb 9 08:54:48.294128 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 08:54:48.294257 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:54:48.296287 systemd[1]: Starting systemd-boot-update.service... Feb 9 08:54:48.298710 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 08:54:48.299554 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 08:54:48.299653 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 08:54:48.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.302421 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 08:54:48.304087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 08:54:48.323760 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Feb 9 08:54:48.325596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 08:54:48.336863 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 08:54:48.338801 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 08:54:48.346002 systemd-tmpfiles[1112]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 08:54:48.433370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 08:54:48.434198 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 9 08:54:48.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.454014 systemd-fsck[1117]: fsck.fat 4.2 (2021-01-31) Feb 9 08:54:48.454014 systemd-fsck[1117]: /dev/vda1: 789 files, 115332/258078 clusters Feb 9 08:54:48.455321 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 08:54:48.457511 systemd[1]: Mounting boot.mount... Feb 9 08:54:48.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.474603 systemd[1]: Mounted boot.mount. Feb 9 08:54:48.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.499105 systemd[1]: Finished systemd-boot-update.service. Feb 9 08:54:48.582507 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 08:54:48.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.584682 systemd[1]: Starting audit-rules.service... Feb 9 08:54:48.587018 systemd[1]: Starting clean-ca-certificates.service... Feb 9 08:54:48.589352 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 08:54:48.592484 systemd[1]: Starting systemd-resolved.service... Feb 9 08:54:48.601127 systemd[1]: Starting systemd-timesyncd.service... Feb 9 08:54:48.603092 systemd[1]: Starting systemd-update-utmp.service... Feb 9 08:54:48.604293 systemd[1]: Finished clean-ca-certificates.service. Feb 9 08:54:48.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.607856 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 08:54:48.626000 audit[1132]: SYSTEM_BOOT pid=1132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.632024 systemd[1]: Finished systemd-update-utmp.service. Feb 9 08:54:48.668120 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 08:54:48.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.719003 systemd[1]: Started systemd-timesyncd.service. Feb 9 08:54:48.719890 systemd[1]: Reached target time-set.target. 
Feb 9 08:54:48.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:48.738000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 08:54:48.738000 audit[1150]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefaac9380 a2=420 a3=0 items=0 ppid=1125 pid=1150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:48.738000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 08:54:48.739769 augenrules[1150]: No rules Feb 9 08:54:48.740487 systemd[1]: Finished audit-rules.service. Feb 9 08:54:48.748747 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 08:54:48.753444 systemd-resolved[1128]: Positive Trust Anchors: Feb 9 08:54:48.753471 systemd-resolved[1128]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 08:54:48.753518 systemd-resolved[1128]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 08:54:48.757366 systemd[1]: Finished ldconfig.service. Feb 9 08:54:48.760336 systemd[1]: Starting systemd-update-done.service... Feb 9 08:54:48.764370 systemd-resolved[1128]: Using system hostname 'ci-3510.3.2-6-9c47918d0b'. Feb 9 08:54:48.766489 systemd[1]: Started systemd-resolved.service. Feb 9 08:54:48.767134 systemd[1]: Reached target network.target. Feb 9 08:54:48.767531 systemd[1]: Reached target nss-lookup.target. Feb 9 08:54:48.771452 systemd[1]: Finished systemd-update-done.service. Feb 9 08:54:48.772035 systemd[1]: Reached target sysinit.target. Feb 9 08:54:48.772529 systemd[1]: Started motdgen.path. Feb 9 08:54:48.772937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 08:54:48.773574 systemd[1]: Started logrotate.timer. Feb 9 08:54:48.774058 systemd[1]: Started mdadm.timer. Feb 9 08:54:48.774418 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 08:54:48.774993 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 08:54:48.775024 systemd[1]: Reached target paths.target. Feb 9 08:54:48.775517 systemd[1]: Reached target timers.target. Feb 9 08:54:48.776401 systemd[1]: Listening on dbus.socket. Feb 9 08:54:48.778457 systemd[1]: Starting docker.socket... Feb 9 08:54:48.781037 systemd[1]: Listening on sshd.socket. Feb 9 08:54:48.781789 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:54:48.782299 systemd[1]: Listening on docker.socket. Feb 9 08:54:48.782829 systemd[1]: Reached target sockets.target. 
Feb 9 08:54:48.783576 systemd[1]: Reached target basic.target. Feb 9 08:54:48.784201 systemd[1]: System is tainted: cgroupsv1 Feb 9 08:54:48.784248 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 08:54:48.784276 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 08:54:48.785550 systemd[1]: Starting containerd.service... Feb 9 08:54:48.787223 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 08:54:48.788959 systemd[1]: Starting dbus.service... Feb 9 08:54:48.790610 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 08:54:48.793331 systemd[1]: Starting extend-filesystems.service... Feb 9 08:54:48.794018 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 08:54:48.803299 systemd[1]: Starting motdgen.service... Feb 9 08:54:48.805671 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 08:54:48.812065 systemd[1]: Starting prepare-critools.service... Feb 9 08:54:48.814217 systemd[1]: Starting prepare-helm.service... Feb 9 08:54:48.819824 jq[1164]: false Feb 9 08:54:48.818134 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 08:54:48.832376 systemd[1]: Starting sshd-keygen.service... Feb 9 08:54:48.835777 systemd[1]: Starting systemd-logind.service... Feb 9 08:54:48.836608 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 08:54:48.836699 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 08:54:48.838482 systemd[1]: Starting update-engine.service... Feb 9 08:54:48.845831 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 08:54:48.853114 jq[1184]: true Feb 9 08:54:48.918392 jq[1191]: true Feb 9 08:54:48.875263 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 08:54:48.918678 tar[1188]: crictl Feb 9 08:54:48.919052 tar[1190]: linux-amd64/helm Feb 9 08:54:48.919302 tar[1186]: ./ Feb 9 08:54:48.919302 tar[1186]: ./macvlan Feb 9 08:54:48.875547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 08:54:48.894274 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 08:54:48.894548 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 08:54:48.950628 dbus-daemon[1163]: [system] SELinux support is enabled Feb 9 08:54:48.951220 systemd[1]: Started dbus.service. Feb 9 08:54:48.954005 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 08:54:48.954043 systemd[1]: Reached target system-config.target. Feb 9 08:54:48.954659 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 08:54:48.956789 systemd[1]: Starting user-configdrive.service... Feb 9 08:54:48.982084 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 08:54:48.982407 systemd[1]: Finished motdgen.service. 
Feb 9 08:54:49.025993 update_engine[1183]: I0209 08:54:49.024004 1183 main.cc:92] Flatcar Update Engine starting Feb 9 08:54:49.049298 coreos-cloudinit[1212]: 2024/02/09 08:54:49 Checking availability of "cloud-drive" Feb 9 08:54:49.049865 coreos-cloudinit[1212]: 2024/02/09 08:54:49 Fetching user-data from datasource of type "cloud-drive" Feb 9 08:54:49.049865 coreos-cloudinit[1212]: 2024/02/09 08:54:49 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 9 08:54:49.050172 coreos-cloudinit[1212]: 2024/02/09 08:54:49 Fetching meta-data from datasource of type "cloud-drive" Feb 9 08:54:49.050172 coreos-cloudinit[1212]: 2024/02/09 08:54:49 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 9 08:54:49.064275 extend-filesystems[1167]: Found vda Feb 9 08:54:49.065601 extend-filesystems[1167]: Found vda1 Feb 9 08:54:49.067055 extend-filesystems[1167]: Found vda2 Feb 9 08:54:49.072869 systemd[1]: Started update-engine.service. Feb 9 08:54:49.073413 extend-filesystems[1167]: Found vda3 Feb 9 08:54:49.074626 extend-filesystems[1167]: Found usr Feb 9 08:54:49.074626 extend-filesystems[1167]: Found vda4 Feb 9 08:54:49.074626 extend-filesystems[1167]: Found vda6 Feb 9 08:54:49.074626 extend-filesystems[1167]: Found vda7 Feb 9 08:54:49.074626 extend-filesystems[1167]: Found vda9 Feb 9 08:54:49.074626 extend-filesystems[1167]: Checking size of /dev/vda9 Feb 9 08:54:49.075327 systemd[1]: Started locksmithd.service. Feb 9 08:54:49.079154 update_engine[1183]: I0209 08:54:49.075917 1183 update_check_scheduler.cc:74] Next update check in 10m51s Feb 9 08:54:49.103194 coreos-cloudinit[1212]: Detected an Ignition config. Exiting... Feb 9 08:54:49.103181 systemd[1]: Finished user-configdrive.service. Feb 9 08:54:49.103760 systemd[1]: Reached target user-config.target. Feb 9 08:54:49.118916 env[1192]: time="2024-02-09T08:54:49.117976158Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 08:54:49.128216 bash[1233]: Updated "/home/core/.ssh/authorized_keys" Feb 9 08:54:49.128559 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 08:54:49.133572 extend-filesystems[1167]: Resized partition /dev/vda9 Feb 9 08:54:49.147015 extend-filesystems[1242]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 08:54:49.159061 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 9 08:54:49.190070 tar[1186]: ./static Feb 9 08:54:49.247486 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 9 08:54:49.262897 env[1192]: time="2024-02-09T08:54:49.260357897Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 08:54:49.263821 systemd-logind[1182]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 08:54:49.263850 systemd-logind[1182]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 08:54:49.264690 env[1192]: time="2024-02-09T08:54:49.264600845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.265222 systemd-logind[1182]: New seat seat0. Feb 9 08:54:49.268796 systemd[1]: Started systemd-logind.service. Feb 9 08:54:49.270075 env[1192]: time="2024-02-09T08:54:49.269010706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:54:49.270075 env[1192]: time="2024-02-09T08:54:49.269059672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.270169 extend-filesystems[1242]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 08:54:49.270169 extend-filesystems[1242]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 9 08:54:49.270169 extend-filesystems[1242]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 9 08:54:49.280170 coreos-metadata[1162]: Feb 09 08:54:49.275 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 9 08:54:49.283754 env[1192]: time="2024-02-09T08:54:49.282233030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:54:49.283754 env[1192]: time="2024-02-09T08:54:49.282287478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.283754 env[1192]: time="2024-02-09T08:54:49.282312600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 08:54:49.283754 env[1192]: time="2024-02-09T08:54:49.282329720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.287139 env[1192]: time="2024-02-09T08:54:49.284192648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.287294 extend-filesystems[1167]: Resized filesystem in /dev/vda9 Feb 9 08:54:49.287294 extend-filesystems[1167]: Found vdb Feb 9 08:54:49.306960 env[1192]: time="2024-02-09T08:54:49.300804840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 08:54:49.306960 env[1192]: time="2024-02-09T08:54:49.304812447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 08:54:49.306960 env[1192]: time="2024-02-09T08:54:49.304858501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 08:54:49.307106 coreos-metadata[1162]: Feb 09 08:54:49.287 INFO Fetch successful Feb 9 08:54:49.287419 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 08:54:49.287676 systemd[1]: Finished extend-filesystems.service. Feb 9 08:54:49.309616 env[1192]: time="2024-02-09T08:54:49.309334459Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 08:54:49.309616 env[1192]: time="2024-02-09T08:54:49.309393217Z" level=info msg="metadata content store policy set" policy=shared Feb 9 08:54:49.329379 unknown[1162]: wrote ssh authorized keys file for user: core Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.329939767Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.329998265Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330023019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330079263Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330104618Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330125932Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330146384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330171301Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330193792Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330216667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330238460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330260768Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330446616Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 08:54:49.331752 env[1192]: time="2024-02-09T08:54:49.330584878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331110155Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331160395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331180671Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331251270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331274646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331295353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331313581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331331805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331349804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331368153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331389530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331412653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331640282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331665839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.332352 env[1192]: time="2024-02-09T08:54:49.331688970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 08:54:49.335748 env[1192]: time="2024-02-09T08:54:49.331708108Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 08:54:49.335748 env[1192]: time="2024-02-09T08:54:49.334087001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 08:54:49.335748 env[1192]: time="2024-02-09T08:54:49.334114006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 08:54:49.335748 env[1192]: time="2024-02-09T08:54:49.334147172Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 08:54:49.335748 env[1192]: time="2024-02-09T08:54:49.334215731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.334494623Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.334587207Z" level=info msg="Connect containerd service" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.334643415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335405395Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335733514Z" level=info msg="Start subscribing containerd event" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335799707Z" level=info msg="Start recovering state" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335898854Z" level=info msg="Start event monitor" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335915986Z" level=info msg="Start snapshots syncer" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335932469Z" level=info msg="Start cni network conf syncer for default" Feb 9 08:54:49.336051 env[1192]: time="2024-02-09T08:54:49.335946525Z" level=info msg="Start streaming server" Feb 9 08:54:49.354994 env[1192]: time="2024-02-09T08:54:49.354885188Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 08:54:49.355176 env[1192]: time="2024-02-09T08:54:49.355125824Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 08:54:49.355353 systemd[1]: Started containerd.service. Feb 9 08:54:49.356347 env[1192]: time="2024-02-09T08:54:49.356200090Z" level=info msg="containerd successfully booted in 0.240794s" Feb 9 08:54:49.370208 update-ssh-keys[1249]: Updated "/home/core/.ssh/authorized_keys" Feb 9 08:54:49.370753 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 08:54:49.393517 tar[1186]: ./vlan Feb 9 08:54:49.511538 tar[1186]: ./portmap Feb 9 08:54:49.613671 tar[1186]: ./host-local Feb 9 08:54:49.699581 tar[1186]: ./vrf Feb 9 08:54:49.801869 systemd-networkd[1059]: eth1: Gained IPv6LL Feb 9 08:54:49.803355 tar[1186]: ./bridge Feb 9 08:54:49.865869 systemd-networkd[1059]: eth0: Gained IPv6LL Feb 9 08:54:49.923696 tar[1186]: ./tuning Feb 9 08:54:50.022187 tar[1186]: ./firewall Feb 9 08:54:50.144520 tar[1186]: ./host-device Feb 9 08:54:50.257553 tar[1186]: ./sbr Feb 9 08:54:50.366998 tar[1186]: ./loopback Feb 9 08:54:50.372875 tar[1190]: linux-amd64/LICENSE Feb 9 08:54:50.373285 tar[1190]: linux-amd64/README.md Feb 9 08:54:50.395488 systemd[1]: Finished prepare-helm.service. Feb 9 08:54:50.461851 tar[1186]: ./dhcp Feb 9 08:54:50.481976 systemd[1]: Finished prepare-critools.service. Feb 9 08:54:50.584638 tar[1186]: ./ptp Feb 9 08:54:50.632376 tar[1186]: ./ipvlan Feb 9 08:54:50.673309 tar[1186]: ./bandwidth Feb 9 08:54:50.731786 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 08:54:50.738564 locksmithd[1235]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 08:54:51.087807 systemd[1]: Created slice system-sshd.slice. Feb 9 08:54:51.115632 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 08:54:51.139161 systemd[1]: Finished sshd-keygen.service. Feb 9 08:54:51.141547 systemd[1]: Starting issuegen.service... Feb 9 08:54:51.143401 systemd[1]: Started sshd@0-143.198.159.117:22-139.178.89.65:38472.service. Feb 9 08:54:51.151708 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 08:54:51.152018 systemd[1]: Finished issuegen.service. Feb 9 08:54:51.154332 systemd[1]: Starting systemd-user-sessions.service... Feb 9 08:54:51.162808 systemd[1]: Finished systemd-user-sessions.service. Feb 9 08:54:51.165045 systemd[1]: Started getty@tty1.service. Feb 9 08:54:51.167140 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 08:54:51.167962 systemd[1]: Reached target getty.target. Feb 9 08:54:51.168908 systemd[1]: Reached target multi-user.target. Feb 9 08:54:51.171123 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 08:54:51.181951 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 08:54:51.182225 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 08:54:51.183040 systemd[1]: Startup finished in 7.485s (kernel) + 7.883s (userspace) = 15.369s. Feb 9 08:54:51.229515 sshd[1278]: Accepted publickey for core from 139.178.89.65 port 38472 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:51.232576 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.245474 systemd[1]: Created slice user-500.slice. Feb 9 08:54:51.246664 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 08:54:51.251939 systemd-logind[1182]: New session 1 of user core. Feb 9 08:54:51.257550 systemd[1]: Finished user-runtime-dir@500.service. 
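The "failed to load cni during init ... no network config found in /etc/cni/net.d" message a few entries back is containerd's CRI plugin reporting an empty CNI configuration directory: the prepare-cni-plugins unit is still unpacking the plugin binaries (the ./vlan, ./portmap, ./bridge, ... tar entries above) at that point, so no network config exists yet. A minimal sketch of an equivalent readiness check, assuming Python 3 and the two directories named in the CRI config dump; the file-extension filter is an assumption about how configs are conventionally named, not containerd's exact loader logic:

    #!/usr/bin/env python3
    """Rough re-creation of the CRI plugin's startup CNI check (illustrative only)."""
    from pathlib import Path

    CONF_DIR = Path("/etc/cni/net.d")   # NetworkPluginConfDir from the config dump above
    BIN_DIR = Path("/opt/cni/bin")      # NetworkPluginBinDir from the config dump above

    def cni_ready() -> bool:
        # CNI configs are conventionally *.conf, *.conflist or *.json files (assumption);
        # treat a missing or empty directory the same way the log message does.
        confs = [p for ext in ("*.conf", "*.conflist", "*.json")
                 for p in CONF_DIR.glob(ext)] if CONF_DIR.is_dir() else []
        plugins = list(BIN_DIR.iterdir()) if BIN_DIR.is_dir() else []
        if not confs:
            print(f"no network config found in {CONF_DIR}: cni plugin not initialized")
        if not plugins:
            print(f"no CNI plugin binaries found in {BIN_DIR}")
        return bool(confs and plugins)

    if __name__ == "__main__":
        print("CNI ready:", cni_ready())

The error is non-fatal here: the CRI plugin keeps watching the directory ("Start cni network conf syncer for default"), which is why containerd still reports booting successfully in 0.240794s.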
Feb 9 08:54:51.259041 systemd[1]: Starting user@500.service... Feb 9 08:54:51.263361 (systemd)[1292]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.343903 systemd[1292]: Queued start job for default target default.target. Feb 9 08:54:51.344781 systemd[1292]: Reached target paths.target. Feb 9 08:54:51.344941 systemd[1292]: Reached target sockets.target. Feb 9 08:54:51.345117 systemd[1292]: Reached target timers.target. Feb 9 08:54:51.345208 systemd[1292]: Reached target basic.target. Feb 9 08:54:51.345431 systemd[1]: Started user@500.service. Feb 9 08:54:51.346571 systemd[1]: Started session-1.scope. Feb 9 08:54:51.347477 systemd[1292]: Reached target default.target. Feb 9 08:54:51.347994 systemd[1292]: Startup finished in 76ms. Feb 9 08:54:51.404947 systemd[1]: Started sshd@1-143.198.159.117:22-139.178.89.65:38480.service. Feb 9 08:54:51.465359 sshd[1302]: Accepted publickey for core from 139.178.89.65 port 38480 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:51.467134 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.472501 systemd-logind[1182]: New session 2 of user core. Feb 9 08:54:51.472970 systemd[1]: Started session-2.scope. Feb 9 08:54:51.539850 sshd[1302]: pam_unix(sshd:session): session closed for user core Feb 9 08:54:51.543502 systemd[1]: sshd@1-143.198.159.117:22-139.178.89.65:38480.service: Deactivated successfully. Feb 9 08:54:51.545452 systemd[1]: Started sshd@2-143.198.159.117:22-139.178.89.65:38490.service. Feb 9 08:54:51.547384 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 08:54:51.548368 systemd-logind[1182]: Session 2 logged out. Waiting for processes to exit. Feb 9 08:54:51.549389 systemd-logind[1182]: Removed session 2. Feb 9 08:54:51.600693 sshd[1309]: Accepted publickey for core from 139.178.89.65 port 38490 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:51.602841 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.609133 systemd[1]: Started session-3.scope. Feb 9 08:54:51.609362 systemd-logind[1182]: New session 3 of user core. Feb 9 08:54:51.668340 sshd[1309]: pam_unix(sshd:session): session closed for user core Feb 9 08:54:51.671922 systemd[1]: Started sshd@3-143.198.159.117:22-139.178.89.65:38506.service. Feb 9 08:54:51.674661 systemd[1]: sshd@2-143.198.159.117:22-139.178.89.65:38490.service: Deactivated successfully. Feb 9 08:54:51.677738 systemd-logind[1182]: Session 3 logged out. Waiting for processes to exit. Feb 9 08:54:51.677906 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 08:54:51.680867 systemd-logind[1182]: Removed session 3. Feb 9 08:54:51.723547 sshd[1314]: Accepted publickey for core from 139.178.89.65 port 38506 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:51.725624 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.730134 systemd-logind[1182]: New session 4 of user core. Feb 9 08:54:51.731100 systemd[1]: Started session-4.scope. Feb 9 08:54:51.793422 sshd[1314]: pam_unix(sshd:session): session closed for user core Feb 9 08:54:51.797320 systemd[1]: sshd@3-143.198.159.117:22-139.178.89.65:38506.service: Deactivated successfully. Feb 9 08:54:51.800365 systemd-logind[1182]: Session 4 logged out. Waiting for processes to exit. Feb 9 08:54:51.802346 systemd[1]: Started sshd@4-143.198.159.117:22-139.178.89.65:38516.service. 
Feb 9 08:54:51.802851 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 08:54:51.804555 systemd-logind[1182]: Removed session 4. Feb 9 08:54:51.855645 sshd[1323]: Accepted publickey for core from 139.178.89.65 port 38516 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:51.856984 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:51.862185 systemd[1]: Started session-5.scope. Feb 9 08:54:51.862458 systemd-logind[1182]: New session 5 of user core. Feb 9 08:54:51.934156 sudo[1327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 08:54:51.934830 sudo[1327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 08:54:51.942986 dbus-daemon[1163]: avc: received setenforce notice (enforcing=-224394000) Feb 9 08:54:51.945681 sudo[1327]: pam_unix(sudo:session): session closed for user root Feb 9 08:54:51.951131 sshd[1323]: pam_unix(sshd:session): session closed for user core Feb 9 08:54:51.954810 systemd[1]: Started sshd@5-143.198.159.117:22-139.178.89.65:38530.service. Feb 9 08:54:51.956986 systemd-logind[1182]: Session 5 logged out. Waiting for processes to exit. Feb 9 08:54:51.957347 systemd[1]: sshd@4-143.198.159.117:22-139.178.89.65:38516.service: Deactivated successfully. Feb 9 08:54:51.958484 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 08:54:51.959096 systemd-logind[1182]: Removed session 5. Feb 9 08:54:52.009251 sshd[1329]: Accepted publickey for core from 139.178.89.65 port 38530 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:52.011056 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:52.016470 systemd-logind[1182]: New session 6 of user core. Feb 9 08:54:52.016822 systemd[1]: Started session-6.scope. Feb 9 08:54:52.076636 sudo[1336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 08:54:52.077298 sudo[1336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 08:54:52.081153 sudo[1336]: pam_unix(sudo:session): session closed for user root Feb 9 08:54:52.087613 sudo[1335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 08:54:52.087884 sudo[1335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 08:54:52.099114 systemd[1]: Stopping audit-rules.service... Feb 9 08:54:52.100000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 08:54:52.100000 audit[1339]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc42bce630 a2=420 a3=0 items=0 ppid=1 pid=1339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:52.100000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 08:54:52.101761 auditctl[1339]: No rules Feb 9 08:54:52.102068 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 08:54:52.102327 systemd[1]: Stopped audit-rules.service. Feb 9 08:54:52.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.104387 systemd[1]: Starting audit-rules.service... 
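The PROCTITLE records above carry the command line of the process that triggered the audit event, hex-encoded with NUL bytes separating the arguments; proctitle=2F7362696E2F617564697463746C002D44, for example, is the auditctl invocation that produced the "No rules" line. A small decoding sketch, assuming Python 3:

    #!/usr/bin/env python3
    """Decode the hex-encoded proctitle= field of an audit PROCTITLE record."""

    def decode_proctitle(hexstr: str) -> str:
        # The kernel stores argv as one buffer with NUL separators; the audit
        # subsystem hex-encodes it because it contains non-printable bytes.
        args = bytes.fromhex(hexstr).split(b"\x00")
        return " ".join(a.decode("utf-8", "replace") for a in args if a)

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D   (matches the "auditctl: No rules" entry above)

The same decoding applies to the iptables PROCTITLE records further down; the first of them, attached to the table=nat chain registration, decodes to /usr/sbin/iptables --wait -t nat -N DOCKER.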
Feb 9 08:54:52.129002 augenrules[1357]: No rules Feb 9 08:54:52.130224 systemd[1]: Finished audit-rules.service. Feb 9 08:54:52.131336 sudo[1335]: pam_unix(sudo:session): session closed for user root Feb 9 08:54:52.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.130000 audit[1335]: USER_END pid=1335 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.130000 audit[1335]: CRED_DISP pid=1335 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.135677 sshd[1329]: pam_unix(sshd:session): session closed for user core Feb 9 08:54:52.137000 audit[1329]: USER_END pid=1329 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.137000 audit[1329]: CRED_DISP pid=1329 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.159.117:22-139.178.89.65:38534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.138762 systemd[1]: Started sshd@6-143.198.159.117:22-139.178.89.65:38534.service. Feb 9 08:54:52.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-143.198.159.117:22-139.178.89.65:38530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.147470 systemd[1]: sshd@5-143.198.159.117:22-139.178.89.65:38530.service: Deactivated successfully. Feb 9 08:54:52.148448 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 08:54:52.150060 systemd-logind[1182]: Session 6 logged out. Waiting for processes to exit. Feb 9 08:54:52.151041 systemd-logind[1182]: Removed session 6. 
Feb 9 08:54:52.189000 audit[1362]: USER_ACCT pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.190774 sshd[1362]: Accepted publickey for core from 139.178.89.65 port 38534 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:54:52.191000 audit[1362]: CRED_ACQ pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.191000 audit[1362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb9413340 a2=3 a3=0 items=0 ppid=1 pid=1362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:52.191000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:54:52.192522 sshd[1362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:54:52.201762 systemd[1]: Started session-7.scope. Feb 9 08:54:52.202190 systemd-logind[1182]: New session 7 of user core. Feb 9 08:54:52.206000 audit[1362]: USER_START pid=1362 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.208000 audit[1367]: CRED_ACQ pid=1367 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:54:52.259000 audit[1368]: USER_ACCT pid=1368 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.260858 sudo[1368]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 08:54:52.260000 audit[1368]: CRED_REFR pid=1368 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.261478 sudo[1368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 08:54:52.263000 audit[1368]: USER_START pid=1368 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.810630 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 08:54:52.818459 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 08:54:52.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:52.819586 systemd[1]: Reached target network-online.target. Feb 9 08:54:52.821710 systemd[1]: Starting docker.service... 
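Apart from the PROCTITLE lines, the audit records here (USER_ACCT, CRED_ACQ, SYSCALL, USER_START, USER_END, ...) are flat key=value lists, with a single-quoted msg='...' payload on the PAM-related events. A rough parsing sketch, assuming Python 3; shlex's quote handling is close enough for the lines shown here, though the real auparse/ausearch tools cover many more edge cases:

    #!/usr/bin/env python3
    """Very rough parser for audit key=value records like the PAM entries above."""
    import shlex

    def parse_audit(record: str) -> dict:
        fields = {}
        for token in shlex.split(record):   # shlex strips the quotes around msg='...'
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value
        return fields

    # Trimmed version of the USER_END record logged at 08:54:52.130000 above.
    rec = ("USER_END pid=1335 uid=500 auid=500 ses=6 "
           "msg='op=PAM:session_close acct=\"root\" exe=\"/usr/bin/sudo\" res=success'")
    parsed = parse_audit(rec)
    print(parsed["pid"], parsed["auid"], parsed["msg"][:20])
    # -> 1335 500 op=PAM:session_close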
Feb 9 08:54:52.872223 env[1385]: time="2024-02-09T08:54:52.871904203Z" level=info msg="Starting up" Feb 9 08:54:52.874274 env[1385]: time="2024-02-09T08:54:52.874241340Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 08:54:52.874399 env[1385]: time="2024-02-09T08:54:52.874383962Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 08:54:52.874483 env[1385]: time="2024-02-09T08:54:52.874464572Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 08:54:52.874557 env[1385]: time="2024-02-09T08:54:52.874543724Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 08:54:52.876985 env[1385]: time="2024-02-09T08:54:52.876938867Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 08:54:52.876985 env[1385]: time="2024-02-09T08:54:52.876972437Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 08:54:52.877139 env[1385]: time="2024-02-09T08:54:52.876995386Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 08:54:52.877139 env[1385]: time="2024-02-09T08:54:52.877011964Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 08:54:53.014802 env[1385]: time="2024-02-09T08:54:53.014752400Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 08:54:53.014802 env[1385]: time="2024-02-09T08:54:53.014784552Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 08:54:53.015109 env[1385]: time="2024-02-09T08:54:53.015082611Z" level=info msg="Loading containers: start." 
Feb 9 08:54:53.074000 audit[1416]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.076608 kernel: kauditd_printk_skb: 140 callbacks suppressed Feb 9 08:54:53.076698 kernel: audit: type=1325 audit(1707468893.074:161): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.074000 audit[1416]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffccf8b8470 a2=0 a3=7ffccf8b845c items=0 ppid=1385 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.083231 kernel: audit: type=1300 audit(1707468893.074:161): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffccf8b8470 a2=0 a3=7ffccf8b845c items=0 ppid=1385 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.083354 kernel: audit: type=1327 audit(1707468893.074:161): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 08:54:53.074000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 08:54:53.085137 kernel: audit: type=1325 audit(1707468893.084:162): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.084000 audit[1418]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.084000 audit[1418]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6dd50ce0 a2=0 a3=7ffd6dd50ccc items=0 ppid=1385 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.092165 kernel: audit: type=1300 audit(1707468893.084:162): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6dd50ce0 a2=0 a3=7ffd6dd50ccc items=0 ppid=1385 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.092317 kernel: audit: type=1327 audit(1707468893.084:162): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 08:54:53.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 08:54:53.088000 audit[1420]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.096972 kernel: audit: type=1325 audit(1707468893.088:163): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.097081 kernel: audit: type=1300 audit(1707468893.088:163): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeeca44b70 a2=0 a3=7ffeeca44b5c items=0 ppid=1385 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.088000 audit[1420]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeeca44b70 a2=0 a3=7ffeeca44b5c items=0 ppid=1385 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.101568 kernel: audit: type=1327 audit(1707468893.088:163): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 08:54:53.088000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 08:54:53.104174 kernel: audit: type=1325 audit(1707468893.092:164): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.092000 audit[1422]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.092000 audit[1422]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff7f9513b0 a2=0 a3=7fff7f95139c items=0 ppid=1385 pid=1422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.092000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 08:54:53.098000 audit[1424]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.098000 audit[1424]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe8b8fb580 a2=0 a3=7ffe8b8fb56c items=0 ppid=1385 pid=1424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.098000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 08:54:53.119000 audit[1429]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.119000 audit[1429]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc53f260c0 a2=0 a3=7ffc53f260ac items=0 ppid=1385 pid=1429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.119000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 08:54:53.135000 audit[1431]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.135000 audit[1431]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe757e7c90 a2=0 a3=7ffe757e7c7c items=0 ppid=1385 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.135000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 08:54:53.138000 audit[1433]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.138000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffca3c00280 a2=0 a3=7ffca3c0026c items=0 ppid=1385 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 08:54:53.140000 audit[1435]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.140000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd058bc370 a2=0 a3=7ffd058bc35c items=0 ppid=1385 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.140000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 08:54:53.154000 audit[1439]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.154000 audit[1439]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc8aa6fa40 a2=0 a3=7ffc8aa6fa2c items=0 ppid=1385 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.154000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 08:54:53.155000 audit[1440]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.155000 audit[1440]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe4fd2a530 a2=0 a3=7ffe4fd2a51c items=0 ppid=1385 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.155000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 08:54:53.166764 kernel: Initializing XFRM netlink socket Feb 9 08:54:53.211495 env[1385]: time="2024-02-09T08:54:53.211431080Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 08:54:53.252000 audit[1448]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.252000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffee3d0e350 a2=0 a3=7ffee3d0e33c items=0 ppid=1385 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.252000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 08:54:53.265000 audit[1451]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.265000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe540f92d0 a2=0 a3=7ffe540f92bc items=0 ppid=1385 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 08:54:53.269000 audit[1454]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.269000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc2c7c3ac0 a2=0 a3=7ffc2c7c3aac items=0 ppid=1385 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.269000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 08:54:53.272000 audit[1456]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.272000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc795179f0 a2=0 a3=7ffc795179dc items=0 ppid=1385 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.272000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 08:54:53.274000 audit[1458]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.274000 audit[1458]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdafdb5670 a2=0 a3=7ffdafdb565c items=0 ppid=1385 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.274000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 08:54:53.277000 audit[1460]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.277000 audit[1460]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe393a34d0 a2=0 a3=7ffe393a34bc items=0 ppid=1385 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.277000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 08:54:53.280000 audit[1462]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.280000 audit[1462]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd2dd1c8b0 a2=0 a3=7ffd2dd1c89c items=0 ppid=1385 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 08:54:53.292000 audit[1465]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.292000 audit[1465]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc828fb880 a2=0 a3=7ffc828fb86c items=0 ppid=1385 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.292000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 08:54:53.295000 audit[1467]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.295000 audit[1467]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffcacf9d7c0 a2=0 a3=7ffcacf9d7ac items=0 ppid=1385 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.295000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 08:54:53.297000 audit[1469]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.297000 audit[1469]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd340ca0d0 a2=0 a3=7ffd340ca0bc items=0 ppid=1385 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.297000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 08:54:53.300000 audit[1471]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.300000 audit[1471]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc683b4140 a2=0 a3=7ffc683b412c items=0 ppid=1385 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.300000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 08:54:53.301607 systemd-networkd[1059]: docker0: Link UP Feb 9 08:54:53.313000 audit[1475]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.313000 audit[1475]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff45de0230 a2=0 a3=7fff45de021c items=0 ppid=1385 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.313000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 08:54:53.315000 audit[1476]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:54:53.315000 audit[1476]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd18fd8aa0 a2=0 a3=7ffd18fd8a8c items=0 ppid=1385 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:54:53.315000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 08:54:53.316439 env[1385]: time="2024-02-09T08:54:53.316403173Z" level=info msg="Loading containers: done." Feb 9 08:54:53.332625 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3782468902-merged.mount: Deactivated successfully. Feb 9 08:54:53.342592 env[1385]: time="2024-02-09T08:54:53.342510853Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 08:54:53.342810 env[1385]: time="2024-02-09T08:54:53.342782502Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 08:54:53.343015 env[1385]: time="2024-02-09T08:54:53.342988561Z" level=info msg="Daemon has completed initialization" Feb 9 08:54:53.371897 systemd[1]: Started docker.service. Feb 9 08:54:53.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 08:54:53.380998 env[1385]: time="2024-02-09T08:54:53.380937031Z" level=info msg="API listen on /run/docker.sock" Feb 9 08:54:53.401886 systemd[1]: Starting coreos-metadata.service... Feb 9 08:54:53.440682 coreos-metadata[1503]: Feb 09 08:54:53.440 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 9 08:54:53.450198 coreos-metadata[1503]: Feb 09 08:54:53.450 INFO Fetch successful Feb 9 08:54:53.462681 systemd[1]: Finished coreos-metadata.service. Feb 9 08:54:53.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:53.480419 systemd[1]: Reloading. Feb 9 08:54:53.557007 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-09T08:54:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:54:53.557036 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-09T08:54:53Z" level=info msg="torcx already run" Feb 9 08:54:53.644622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:54:53.644644 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:54:53.661935 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:54:53.736497 systemd[1]: Started kubelet.service. Feb 9 08:54:53.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:54:53.812786 kubelet[1591]: E0209 08:54:53.811745 1591 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 08:54:53.815162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 08:54:53.815449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 08:54:53.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 08:54:54.282873 env[1192]: time="2024-02-09T08:54:54.282806350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 08:54:54.936705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468408972.mount: Deactivated successfully. 
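The kubelet exit above ("failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set") is flag validation failing before the kubelet does any real work: from Kubernetes 1.26 the kubelet requires an explicit CRI endpoint, and on this host containerd is already serving one at /run/containerd/containerd.sock (the msg=serving... entries earlier). A small check sketch, assuming Python 3; the printed flag is the usual remedy for this error in general, not a change this log shows being applied:

    #!/usr/bin/env python3
    """Check for the containerd CRI socket the kubelet needs (illustrative only)."""
    import stat
    from pathlib import Path

    # Address containerd reports as "serving..." earlier in this log.
    SOCK = Path("/run/containerd/containerd.sock")

    if SOCK.exists() and stat.S_ISSOCK(SOCK.stat().st_mode):
        print(f"--container-runtime-endpoint=unix://{SOCK}")
    else:
        print(f"{SOCK} not present; the kubelet would fail flag validation as in the log")

Within this excerpt the unit keeps exiting with the same error on each restart; only the final start (kubelet[1787] at 08:55:15.99, end of this excerpt) gets past flag validation and reaches its flag-deprecation warnings and server.go startup messages.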
Feb 9 08:54:57.070787 env[1192]: time="2024-02-09T08:54:57.070728965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:54:57.073554 env[1192]: time="2024-02-09T08:54:57.073507334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:54:57.075975 env[1192]: time="2024-02-09T08:54:57.075933090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:54:57.077589 env[1192]: time="2024-02-09T08:54:57.077556914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:54:57.078360 env[1192]: time="2024-02-09T08:54:57.078326928Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 08:54:57.091774 env[1192]: time="2024-02-09T08:54:57.091712936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 08:54:58.796762 systemd-timesyncd[1131]: Timed out waiting for reply from 173.71.73.214:123 (0.flatcar.pool.ntp.org). Feb 9 08:54:59.532750 systemd-resolved[1128]: Clock change detected. Flushing caches. Feb 9 08:54:59.533459 systemd-timesyncd[1131]: Contacted time server 99.119.214.210:123 (0.flatcar.pool.ntp.org). Feb 9 08:54:59.533898 systemd-timesyncd[1131]: Initial clock synchronization to Fri 2024-02-09 08:54:59.532592 UTC. 
Feb 9 08:55:00.132371 env[1192]: time="2024-02-09T08:55:00.132305436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:00.135850 env[1192]: time="2024-02-09T08:55:00.135805192Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:00.138765 env[1192]: time="2024-02-09T08:55:00.138714181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:00.142432 env[1192]: time="2024-02-09T08:55:00.142376395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:00.144191 env[1192]: time="2024-02-09T08:55:00.143612169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 08:55:00.162406 env[1192]: time="2024-02-09T08:55:00.162350336Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 08:55:01.669636 env[1192]: time="2024-02-09T08:55:01.669548506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:01.671939 env[1192]: time="2024-02-09T08:55:01.671891935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:01.675801 env[1192]: time="2024-02-09T08:55:01.675739216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:01.679381 env[1192]: time="2024-02-09T08:55:01.679316237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:01.680687 env[1192]: time="2024-02-09T08:55:01.680638097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 08:55:01.703293 env[1192]: time="2024-02-09T08:55:01.703245680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 08:55:02.971356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029184361.mount: Deactivated successfully. 
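Each completed pull above ends with a "PullImage ... returns image reference" entry tying the requested tag (kube-apiserver, kube-controller-manager and kube-scheduler at v1.26.13 so far) to the sha256 image ID referenced by the surrounding ImageCreate/ImageUpdate events. A small extraction sketch, assuming Python 3; the regex is tuned to the escaped-quote msg format shown in these lines, not to any containerd API:

    #!/usr/bin/env python3
    """Extract tag -> image reference pairs from containerd PullImage log lines."""
    import re
    import sys

    PATTERN = re.compile(
        r'PullImage \\"(?P<tag>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
    )

    def pulls(lines):
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("tag"), m.group("ref")

    if __name__ == "__main__":
        for tag, ref in pulls(sys.stdin):
            print(f"{tag} -> {ref}")

Fed the containerd entries from this log (piped in from journalctl, for instance), it would print one tag -> sha256:... pair per completed pull.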
Feb 9 08:55:03.663197 env[1192]: time="2024-02-09T08:55:03.663112237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:03.665134 env[1192]: time="2024-02-09T08:55:03.665088294Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:03.667673 env[1192]: time="2024-02-09T08:55:03.667631540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:03.669172 env[1192]: time="2024-02-09T08:55:03.669132743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:03.669809 env[1192]: time="2024-02-09T08:55:03.669765299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 08:55:03.683823 env[1192]: time="2024-02-09T08:55:03.683784017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 08:55:04.224177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234586930.mount: Deactivated successfully. Feb 9 08:55:04.233270 env[1192]: time="2024-02-09T08:55:04.233174203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:04.236503 env[1192]: time="2024-02-09T08:55:04.236441772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:04.239761 env[1192]: time="2024-02-09T08:55:04.239577960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:04.242371 env[1192]: time="2024-02-09T08:55:04.242300251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:04.243377 env[1192]: time="2024-02-09T08:55:04.243324215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 08:55:04.257369 env[1192]: time="2024-02-09T08:55:04.257313651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 08:55:04.726145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 08:55:04.726347 systemd[1]: Stopped kubelet.service. Feb 9 08:55:04.734340 kernel: kauditd_printk_skb: 66 callbacks suppressed Feb 9 08:55:04.734421 kernel: audit: type=1130 audit(1707468904.724:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:55:04.734448 kernel: audit: type=1131 audit(1707468904.724:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:04.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:04.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:04.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:04.728630 systemd[1]: Started kubelet.service. Feb 9 08:55:04.738588 kernel: audit: type=1130 audit(1707468904.726:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:04.809239 kubelet[1638]: E0209 08:55:04.809162 1638 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 08:55:04.815376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 08:55:04.815640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 08:55:04.819625 kernel: audit: type=1131 audit(1707468904.814:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 08:55:04.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 08:55:05.161113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042450200.mount: Deactivated successfully. 
Feb 9 08:55:09.991361 env[1192]: time="2024-02-09T08:55:09.991306938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:09.995257 env[1192]: time="2024-02-09T08:55:09.995200284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:09.998694 env[1192]: time="2024-02-09T08:55:09.998635067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:10.002065 env[1192]: time="2024-02-09T08:55:10.002012800Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:10.005239 env[1192]: time="2024-02-09T08:55:10.005163127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 08:55:10.019964 env[1192]: time="2024-02-09T08:55:10.019891887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 08:55:10.759470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411352703.mount: Deactivated successfully. Feb 9 08:55:11.372286 env[1192]: time="2024-02-09T08:55:11.372224881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:11.374304 env[1192]: time="2024-02-09T08:55:11.374258108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:11.375806 env[1192]: time="2024-02-09T08:55:11.375771164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:11.377171 env[1192]: time="2024-02-09T08:55:11.377134485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:11.377886 env[1192]: time="2024-02-09T08:55:11.377846276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 08:55:14.976248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 08:55:14.976515 systemd[1]: Stopped kubelet.service. Feb 9 08:55:14.978511 systemd[1]: Started kubelet.service. Feb 9 08:55:14.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:14.982590 kernel: audit: type=1130 audit(1707468914.974:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 08:55:14.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:14.986592 kernel: audit: type=1131 audit(1707468914.975:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:14.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:14.996589 kernel: audit: type=1130 audit(1707468914.981:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.062927 kubelet[1704]: E0209 08:55:15.062864 1704 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 08:55:15.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 08:55:15.064829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 08:55:15.065024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 08:55:15.069613 kernel: audit: type=1131 audit(1707468915.063:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 08:55:15.601178 systemd[1]: Stopped kubelet.service. Feb 9 08:55:15.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.608968 kernel: audit: type=1130 audit(1707468915.599:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.609133 kernel: audit: type=1131 audit(1707468915.599:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.629637 systemd[1]: Reloading. 
Feb 9 08:55:15.698903 /usr/lib/systemd/system-generators/torcx-generator[1734]: time="2024-02-09T08:55:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:55:15.698933 /usr/lib/systemd/system-generators/torcx-generator[1734]: time="2024-02-09T08:55:15Z" level=info msg="torcx already run" Feb 9 08:55:15.804199 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:55:15.804397 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:55:15.822053 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:55:15.932917 systemd[1]: Started kubelet.service. Feb 9 08:55:15.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.938599 kernel: audit: type=1130 audit(1707468915.931:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:15.999208 kubelet[1787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 08:55:15.999208 kubelet[1787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 08:55:15.999704 kubelet[1787]: I0209 08:55:15.999259 1787 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 08:55:16.000851 kubelet[1787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 08:55:16.000851 kubelet[1787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
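The reload above also surfaces two deprecated resource directives in locksmithd.service (lines 8 and 9) and a legacy /var/run path in docker.socket, which systemd rewrites to /run/docker.sock on the fly. A sketch of the modernized lines follows; the numeric values are placeholders, since the original unit contents are not shown in this log, and CPUShares=/MemoryLimit= values do not translate one-to-one to CPUWeight=/MemoryMax=.

    # locksmithd.service (excerpt) - replaces the directives flagged at lines 8-9
    [Service]
    # was: CPUShares=...   -> weight-based CPU accounting (systemd's default weight is 100)
    CPUWeight=100
    # was: MemoryLimit=... -> MemoryMax= is the replacement systemd suggests; value is a placeholder
    MemoryMax=512M

    # docker.socket (excerpt) - drop the legacy /var/run prefix systemd warns about
    [Socket]
    ListenStream=/run/docker.sock

Updating the unit files, rather than relying on systemd's automatic rewrite of the socket path, keeps these warnings out of future reloads.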
Feb 9 08:55:16.516950 kubelet[1787]: I0209 08:55:16.516911 1787 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 08:55:16.517198 kubelet[1787]: I0209 08:55:16.517183 1787 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 08:55:16.517581 kubelet[1787]: I0209 08:55:16.517544 1787 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 08:55:16.524041 kubelet[1787]: I0209 08:55:16.523998 1787 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 08:55:16.525308 kubelet[1787]: E0209 08:55:16.525277 1787 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.159.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.527323 kubelet[1787]: I0209 08:55:16.527267 1787 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 08:55:16.527777 kubelet[1787]: I0209 08:55:16.527755 1787 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 08:55:16.527868 kubelet[1787]: I0209 08:55:16.527853 1787 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 08:55:16.527991 kubelet[1787]: I0209 08:55:16.527883 1787 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 08:55:16.527991 kubelet[1787]: I0209 08:55:16.527900 1787 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 08:55:16.528084 kubelet[1787]: I0209 08:55:16.528056 1787 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:55:16.531614 kubelet[1787]: I0209 08:55:16.531585 1787 kubelet.go:398] "Attempting to sync node with API server" Feb 9 08:55:16.531614 kubelet[1787]: I0209 08:55:16.531612 1787 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 08:55:16.531791 kubelet[1787]: I0209 08:55:16.531636 1787 kubelet.go:297] "Adding apiserver pod source" Feb 9 08:55:16.531791 kubelet[1787]: I0209 08:55:16.531653 1787 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 08:55:16.535267 kubelet[1787]: W0209 08:55:16.535214 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.198.159.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-6-9c47918d0b&limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.535458 kubelet[1787]: E0209 08:55:16.535443 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.159.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-6-9c47918d0b&limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.535684 kubelet[1787]: I0209 08:55:16.535665 1787 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 08:55:16.536055 kubelet[1787]: W0209 08:55:16.536041 1787 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 08:55:16.536623 kubelet[1787]: I0209 08:55:16.536604 1787 server.go:1186] "Started kubelet" Feb 9 08:55:16.537234 kubelet[1787]: W0209 08:55:16.537181 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.198.159.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.537300 kubelet[1787]: E0209 08:55:16.537242 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.159.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.537429 kubelet[1787]: E0209 08:55:16.537340 1787 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-6-9c47918d0b.17b225eedb2f168a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-6-9c47918d0b", UID:"ci-3510.3.2-6-9c47918d0b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-6-9c47918d0b"}, FirstTimestamp:time.Date(2024, time.February, 9, 8, 55, 16, 536579722, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 8, 55, 16, 536579722, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://143.198.159.117:6443/api/v1/namespaces/default/events": dial tcp 143.198.159.117:6443: connect: connection refused'(may retry after sleeping) Feb 9 08:55:16.537585 kubelet[1787]: I0209 08:55:16.537458 1787 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 08:55:16.538137 kubelet[1787]: I0209 
08:55:16.538116 1787 server.go:451] "Adding debug handlers to kubelet server" Feb 9 08:55:16.539706 kubelet[1787]: E0209 08:55:16.539683 1787 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 08:55:16.539796 kubelet[1787]: E0209 08:55:16.539716 1787 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 08:55:16.540915 kubelet[1787]: I0209 08:55:16.540896 1787 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 08:55:16.539000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:16.541105 kubelet[1787]: I0209 08:55:16.541093 1787 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 08:55:16.541239 kubelet[1787]: I0209 08:55:16.541229 1787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 08:55:16.543071 kubelet[1787]: I0209 08:55:16.543055 1787 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 08:55:16.543221 kubelet[1787]: I0209 08:55:16.543206 1787 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 08:55:16.539000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:16.546094 kernel: audit: type=1400 audit(1707468916.539:200): avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:16.546179 kernel: audit: type=1401 audit(1707468916.539:200): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:16.546205 kernel: audit: type=1300 audit(1707468916.539:200): arch=c000003e syscall=188 success=no exit=-22 a0=c000467ef0 a1=c000da2360 a2=c000467ec0 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.539000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000467ef0 a1=c000da2360 a2=c000467ec0 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.550084 kubelet[1787]: E0209 08:55:16.550053 1787 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://143.198.159.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-6-9c47918d0b?timeout=10s": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.550402 kubelet[1787]: W0209 08:55:16.550347 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.198.159.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.550538 kubelet[1787]: E0209 08:55:16.550523 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.159.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.539000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:16.539000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:16.539000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:16.539000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b49960 a1=c000da2378 a2=c000467f80 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.539000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:16.552000 audit[1797]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.552000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcd92f0c00 a2=0 a3=7ffcd92f0bec items=0 ppid=1787 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 08:55:16.576000 audit[1800]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.576000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc191b1fe0 a2=0 a3=7ffc191b1fcc items=0 ppid=1787 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 08:55:16.580000 audit[1803]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.580000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2711d370 a2=0 a3=7ffd2711d35c items=0 ppid=1787 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
08:55:16.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 08:55:16.586000 audit[1805]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.586000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc47883f00 a2=0 a3=7ffc47883eec items=0 ppid=1787 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 08:55:16.594000 audit[1808]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.594000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe475798f0 a2=0 a3=7ffe475798dc items=0 ppid=1787 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 08:55:16.596000 audit[1809]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.596000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc99976fb0 a2=0 a3=7ffc99976f9c items=0 ppid=1787 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 08:55:16.604000 audit[1813]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.604000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd76055580 a2=0 a3=7ffd7605556c items=0 ppid=1787 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 08:55:16.608037 kubelet[1787]: I0209 08:55:16.608004 1787 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 08:55:16.608037 kubelet[1787]: I0209 08:55:16.608029 1787 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 08:55:16.608189 kubelet[1787]: I0209 08:55:16.608050 1787 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:55:16.611549 kubelet[1787]: I0209 08:55:16.611513 1787 policy_none.go:49] "None policy: Start" Feb 9 08:55:16.612639 kubelet[1787]: I0209 
08:55:16.612612 1787 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 08:55:16.612639 kubelet[1787]: I0209 08:55:16.612644 1787 state_mem.go:35] "Initializing new in-memory state store" Feb 9 08:55:16.612000 audit[1816]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.612000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff60858910 a2=0 a3=7fff608588fc items=0 ppid=1787 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 08:55:16.614000 audit[1817]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.614000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcbd7131f0 a2=0 a3=7ffcbd7131dc items=0 ppid=1787 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 08:55:16.620388 kubelet[1787]: I0209 08:55:16.620348 1787 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 08:55:16.618000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:16.618000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:16.618000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e2cf60 a1=c000fe3e78 a2=c000e2cf30 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.618000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:16.620739 kubelet[1787]: I0209 08:55:16.620450 1787 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 08:55:16.620981 kubelet[1787]: I0209 08:55:16.620962 1787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 08:55:16.623000 audit[1818]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.623000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6f4b3590 a2=0 a3=7fff6f4b357c items=0 ppid=1787 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 08:55:16.626301 kubelet[1787]: E0209 08:55:16.626281 1787 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-6-9c47918d0b\" not found" Feb 9 08:55:16.628000 audit[1820]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.628000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe824a0be0 a2=0 a3=7ffe824a0bcc items=0 ppid=1787 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 08:55:16.632000 audit[1822]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.632000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc1b727470 a2=0 a3=7ffc1b72745c items=0 ppid=1787 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 08:55:16.636000 audit[1824]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.636000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffea164f460 a2=0 a3=7ffea164f44c items=0 ppid=1787 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 08:55:16.640000 audit[1826]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1826 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.640000 audit[1826]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffee2363580 a2=0 a3=7ffee236356c items=0 ppid=1787 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.640000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 08:55:16.644097 kubelet[1787]: I0209 08:55:16.644073 1787 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.644767 kubelet[1787]: E0209 08:55:16.644751 1787 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.159.117:6443/api/v1/nodes\": dial tcp 143.198.159.117:6443: connect: connection refused" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.644000 audit[1828]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.644000 audit[1828]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe8ace4400 a2=0 a3=7ffe8ace43ec items=0 ppid=1787 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 08:55:16.646772 kubelet[1787]: I0209 08:55:16.646755 1787 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 08:55:16.646000 audit[1830]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.646000 audit[1829]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.646000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe787d27b0 a2=0 a3=7ffe787d279c items=0 ppid=1787 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 08:55:16.646000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd43881d00 a2=0 a3=7ffd43881cec items=0 ppid=1787 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 08:55:16.648000 audit[1832]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.648000 audit[1831]: NETFILTER_CFG table=nat:44 family=10 entries=2 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.648000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc35177290 a2=0 a3=7ffc3517727c items=0 ppid=1787 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.648000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe34677790 a2=0 a3=10e3 items=0 ppid=1787 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.648000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 08:55:16.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 08:55:16.650000 audit[1834]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:16.650000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4a327e60 a2=0 a3=7ffe4a327e4c items=0 ppid=1787 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 08:55:16.653000 audit[1835]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 
9 08:55:16.653000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffca85b0160 a2=0 a3=7ffca85b014c items=0 ppid=1787 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 08:55:16.654000 audit[1836]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.654000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe3c2e5f00 a2=0 a3=7ffe3c2e5eec items=0 ppid=1787 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 08:55:16.658000 audit[1838]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.658000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff2e200550 a2=0 a3=7fff2e20053c items=0 ppid=1787 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 08:55:16.660000 audit[1839]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.660000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3a626450 a2=0 a3=7ffd3a62643c items=0 ppid=1787 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 08:55:16.663000 audit[1840]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.663000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb326bf10 a2=0 a3=7ffeb326befc items=0 ppid=1787 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 08:55:16.666000 audit[1842]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Feb 9 08:55:16.666000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd77b65af0 a2=0 a3=7ffd77b65adc items=0 ppid=1787 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 08:55:16.669000 audit[1844]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.669000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff72ecf0e0 a2=0 a3=7fff72ecf0cc items=0 ppid=1787 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 08:55:16.673000 audit[1846]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.673000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffde3a05da0 a2=0 a3=7ffde3a05d8c items=0 ppid=1787 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 08:55:16.676000 audit[1848]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.676000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff0a67cfe0 a2=0 a3=7fff0a67cfcc items=0 ppid=1787 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 08:55:16.681000 audit[1850]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.681000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffc22c0a3b0 a2=0 a3=7ffc22c0a39c items=0 ppid=1787 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.681000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 08:55:16.684192 kubelet[1787]: I0209 08:55:16.684153 1787 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 08:55:16.684327 kubelet[1787]: I0209 08:55:16.684316 1787 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 08:55:16.684424 kubelet[1787]: I0209 08:55:16.684414 1787 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 08:55:16.684600 kubelet[1787]: E0209 08:55:16.684590 1787 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 08:55:16.685906 kubelet[1787]: W0209 08:55:16.685850 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.198.159.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.685906 kubelet[1787]: E0209 08:55:16.685911 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.159.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.684000 audit[1851]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.684000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca49aab20 a2=0 a3=7ffca49aab0c items=0 ppid=1787 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 08:55:16.686000 audit[1853]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.686000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1de49850 a2=0 a3=7ffd1de4983c items=0 ppid=1787 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 08:55:16.688000 audit[1854]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:16.688000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc81f6f970 a2=0 a3=7ffc81f6f95c items=0 ppid=1787 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:16.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 
08:55:16.751326 kubelet[1787]: E0209 08:55:16.751271 1787 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://143.198.159.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-6-9c47918d0b?timeout=10s": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:16.785947 kubelet[1787]: I0209 08:55:16.784856 1787 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:16.788857 kubelet[1787]: I0209 08:55:16.787935 1787 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:16.789913 kubelet[1787]: I0209 08:55:16.789881 1787 status_manager.go:698] "Failed to get status for pod" podUID=9625b35f31936332fe19372e406be734 pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" err="Get \"https://143.198.159.117:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-6-9c47918d0b\": dial tcp 143.198.159.117:6443: connect: connection refused" Feb 9 08:55:16.790071 kubelet[1787]: I0209 08:55:16.790052 1787 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:16.794081 kubelet[1787]: I0209 08:55:16.794043 1787 status_manager.go:698] "Failed to get status for pod" podUID=321d96fe837fb20a67154c8af8e2c6bb pod="kube-system/kube-scheduler-ci-3510.3.2-6-9c47918d0b" err="Get \"https://143.198.159.117:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-6-9c47918d0b\": dial tcp 143.198.159.117:6443: connect: connection refused" Feb 9 08:55:16.796785 kubelet[1787]: I0209 08:55:16.796757 1787 status_manager.go:698] "Failed to get status for pod" podUID=517e1392fe4f0f6bdbee037e3e602ab2 pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" err="Get \"https://143.198.159.117:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-6-9c47918d0b\": dial tcp 143.198.159.117:6443: connect: connection refused" Feb 9 08:55:16.846146 kubelet[1787]: I0209 08:55:16.846120 1787 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.846731 kubelet[1787]: E0209 08:55:16.846710 1787 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.159.117:6443/api/v1/nodes\": dial tcp 143.198.159.117:6443: connect: connection refused" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.846934 kubelet[1787]: I0209 08:55:16.846916 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847180 kubelet[1787]: I0209 08:55:16.847164 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847309 kubelet[1787]: I0209 08:55:16.847298 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/321d96fe837fb20a67154c8af8e2c6bb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-6-9c47918d0b\" (UID: \"321d96fe837fb20a67154c8af8e2c6bb\") " 
pod="kube-system/kube-scheduler-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847435 kubelet[1787]: I0209 08:55:16.847424 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847552 kubelet[1787]: I0209 08:55:16.847542 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847707 kubelet[1787]: I0209 08:55:16.847695 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.847833 kubelet[1787]: I0209 08:55:16.847823 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.848004 kubelet[1787]: I0209 08:55:16.847994 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:16.848108 kubelet[1787]: I0209 08:55:16.848099 1787 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:17.093316 kubelet[1787]: E0209 08:55:17.093183 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.094412 kubelet[1787]: E0209 08:55:17.094391 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.095141 env[1192]: time="2024-02-09T08:55:17.095070787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-6-9c47918d0b,Uid:9625b35f31936332fe19372e406be734,Namespace:kube-system,Attempt:0,}" Feb 9 08:55:17.095986 env[1192]: time="2024-02-09T08:55:17.095938510Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-6-9c47918d0b,Uid:321d96fe837fb20a67154c8af8e2c6bb,Namespace:kube-system,Attempt:0,}" Feb 9 08:55:17.096940 kubelet[1787]: E0209 08:55:17.096918 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.097512 env[1192]: time="2024-02-09T08:55:17.097418459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-6-9c47918d0b,Uid:517e1392fe4f0f6bdbee037e3e602ab2,Namespace:kube-system,Attempt:0,}" Feb 9 08:55:17.153297 kubelet[1787]: E0209 08:55:17.152877 1787 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://143.198.159.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-6-9c47918d0b?timeout=10s": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.248495 kubelet[1787]: I0209 08:55:17.248448 1787 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:17.248891 kubelet[1787]: E0209 08:55:17.248877 1787 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.159.117:6443/api/v1/nodes\": dial tcp 143.198.159.117:6443: connect: connection refused" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:17.593065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950857904.mount: Deactivated successfully. Feb 9 08:55:17.598818 env[1192]: time="2024-02-09T08:55:17.598770443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.601860 env[1192]: time="2024-02-09T08:55:17.601777696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.607008 env[1192]: time="2024-02-09T08:55:17.606792090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.611805 env[1192]: time="2024-02-09T08:55:17.611739775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.613673 env[1192]: time="2024-02-09T08:55:17.613626269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.616207 env[1192]: time="2024-02-09T08:55:17.616161176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.618783 env[1192]: time="2024-02-09T08:55:17.618740151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.620533 env[1192]: time="2024-02-09T08:55:17.620491435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.624618 env[1192]: 
time="2024-02-09T08:55:17.624547183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.626517 env[1192]: time="2024-02-09T08:55:17.626447710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.628379 env[1192]: time="2024-02-09T08:55:17.628322785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.638346 env[1192]: time="2024-02-09T08:55:17.638266061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:17.671982 env[1192]: time="2024-02-09T08:55:17.671876575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:17.671982 env[1192]: time="2024-02-09T08:55:17.671983644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:17.672228 env[1192]: time="2024-02-09T08:55:17.672008926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:17.672228 env[1192]: time="2024-02-09T08:55:17.672157046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fddd75e20fad8c7a7797d3ac3723ce7e6a2312287ee3a70ade670be982636ade pid=1876 runtime=io.containerd.runc.v2 Feb 9 08:55:17.674775 env[1192]: time="2024-02-09T08:55:17.674681413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:17.674775 env[1192]: time="2024-02-09T08:55:17.674739220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:17.675163 env[1192]: time="2024-02-09T08:55:17.675093079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:17.675900 env[1192]: time="2024-02-09T08:55:17.675852383Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae368824d7924520578c79a8519c18917844f3b670066a7511df037734489eed pid=1864 runtime=io.containerd.runc.v2 Feb 9 08:55:17.681153 env[1192]: time="2024-02-09T08:55:17.681059756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:17.681389 env[1192]: time="2024-02-09T08:55:17.681358138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:17.681542 env[1192]: time="2024-02-09T08:55:17.681514134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:17.681922 env[1192]: time="2024-02-09T08:55:17.681886433Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3f4729f8925f91fa1863c80a2624706170caa188662bd2746d0b388d77f853c pid=1893 runtime=io.containerd.runc.v2 Feb 9 08:55:17.792411 env[1192]: time="2024-02-09T08:55:17.791866452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-6-9c47918d0b,Uid:517e1392fe4f0f6bdbee037e3e602ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae368824d7924520578c79a8519c18917844f3b670066a7511df037734489eed\"" Feb 9 08:55:17.795940 kubelet[1787]: E0209 08:55:17.794984 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.796586 env[1192]: time="2024-02-09T08:55:17.796533107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-6-9c47918d0b,Uid:9625b35f31936332fe19372e406be734,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3f4729f8925f91fa1863c80a2624706170caa188662bd2746d0b388d77f853c\"" Feb 9 08:55:17.797364 kubelet[1787]: E0209 08:55:17.797342 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.797672 env[1192]: time="2024-02-09T08:55:17.797643008Z" level=info msg="CreateContainer within sandbox \"ae368824d7924520578c79a8519c18917844f3b670066a7511df037734489eed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 08:55:17.799707 env[1192]: time="2024-02-09T08:55:17.799681593Z" level=info msg="CreateContainer within sandbox \"a3f4729f8925f91fa1863c80a2624706170caa188662bd2746d0b388d77f853c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 08:55:17.807204 env[1192]: time="2024-02-09T08:55:17.807158947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-6-9c47918d0b,Uid:321d96fe837fb20a67154c8af8e2c6bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fddd75e20fad8c7a7797d3ac3723ce7e6a2312287ee3a70ade670be982636ade\"" Feb 9 08:55:17.808471 kubelet[1787]: E0209 08:55:17.808443 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:17.812149 env[1192]: time="2024-02-09T08:55:17.812105704Z" level=info msg="CreateContainer within sandbox \"fddd75e20fad8c7a7797d3ac3723ce7e6a2312287ee3a70ade670be982636ade\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 08:55:17.823141 env[1192]: time="2024-02-09T08:55:17.823086423Z" level=info msg="CreateContainer within sandbox \"ae368824d7924520578c79a8519c18917844f3b670066a7511df037734489eed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6499132a138268731ca61dde23feff5a4d2582c0677eb2c1c4b47a252be6f63\"" Feb 9 08:55:17.824238 env[1192]: time="2024-02-09T08:55:17.824188680Z" level=info msg="StartContainer for \"f6499132a138268731ca61dde23feff5a4d2582c0677eb2c1c4b47a252be6f63\"" Feb 9 08:55:17.829247 env[1192]: time="2024-02-09T08:55:17.829175400Z" level=info msg="CreateContainer within sandbox \"a3f4729f8925f91fa1863c80a2624706170caa188662bd2746d0b388d77f853c\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b752b4df6d0f1b8bd597f0f9533620baea50dec7b7dcce2d8a16f9ad10f9e9e6\"" Feb 9 08:55:17.829971 env[1192]: time="2024-02-09T08:55:17.829936365Z" level=info msg="StartContainer for \"b752b4df6d0f1b8bd597f0f9533620baea50dec7b7dcce2d8a16f9ad10f9e9e6\"" Feb 9 08:55:17.832158 env[1192]: time="2024-02-09T08:55:17.832105348Z" level=info msg="CreateContainer within sandbox \"fddd75e20fad8c7a7797d3ac3723ce7e6a2312287ee3a70ade670be982636ade\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42370bc9b2cd8f2a9f7535170174bd1d25643cd398a80013312115d2aaf4ea9e\"" Feb 9 08:55:17.832798 env[1192]: time="2024-02-09T08:55:17.832772275Z" level=info msg="StartContainer for \"42370bc9b2cd8f2a9f7535170174bd1d25643cd398a80013312115d2aaf4ea9e\"" Feb 9 08:55:17.883806 kubelet[1787]: W0209 08:55:17.883640 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://143.198.159.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.883806 kubelet[1787]: E0209 08:55:17.883701 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.159.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.899057 kubelet[1787]: W0209 08:55:17.898987 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://143.198.159.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.899057 kubelet[1787]: E0209 08:55:17.899052 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.159.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.942664 env[1192]: time="2024-02-09T08:55:17.942605954Z" level=info msg="StartContainer for \"b752b4df6d0f1b8bd597f0f9533620baea50dec7b7dcce2d8a16f9ad10f9e9e6\" returns successfully" Feb 9 08:55:17.955054 kubelet[1787]: E0209 08:55:17.955008 1787 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://143.198.159.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-6-9c47918d0b?timeout=10s": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:17.999164 env[1192]: time="2024-02-09T08:55:17.999095919Z" level=info msg="StartContainer for \"f6499132a138268731ca61dde23feff5a4d2582c0677eb2c1c4b47a252be6f63\" returns successfully" Feb 9 08:55:18.001046 env[1192]: time="2024-02-09T08:55:18.000993260Z" level=info msg="StartContainer for \"42370bc9b2cd8f2a9f7535170174bd1d25643cd398a80013312115d2aaf4ea9e\" returns successfully" Feb 9 08:55:18.050358 kubelet[1787]: I0209 08:55:18.050191 1787 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:18.050959 kubelet[1787]: E0209 08:55:18.050931 1787 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://143.198.159.117:6443/api/v1/nodes\": dial tcp 143.198.159.117:6443: connect: connection refused" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:18.107790 kubelet[1787]: W0209 
08:55:18.107708 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://143.198.159.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-6-9c47918d0b&limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:18.108310 kubelet[1787]: E0209 08:55:18.108291 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.159.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-6-9c47918d0b&limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:18.186411 kubelet[1787]: W0209 08:55:18.186250 1787 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://143.198.159.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:18.186643 kubelet[1787]: E0209 08:55:18.186629 1787 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.159.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:18.615480 kubelet[1787]: E0209 08:55:18.615446 1787 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.159.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.159.117:6443: connect: connection refused Feb 9 08:55:18.700406 kubelet[1787]: E0209 08:55:18.700374 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:18.702940 kubelet[1787]: E0209 08:55:18.702912 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:18.705249 kubelet[1787]: E0209 08:55:18.705225 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:19.652843 kubelet[1787]: I0209 08:55:19.652814 1787 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:19.707217 kubelet[1787]: E0209 08:55:19.707191 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:19.708476 kubelet[1787]: E0209 08:55:19.708013 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:19.708672 kubelet[1787]: E0209 08:55:19.708365 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:20.708549 kubelet[1787]: E0209 08:55:20.708519 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:20.709945 kubelet[1787]: E0209 08:55:20.709923 1787 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:21.125080 kubelet[1787]: I0209 08:55:21.125038 1787 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:21.539181 kubelet[1787]: I0209 08:55:21.539119 1787 apiserver.go:52] "Watching apiserver" Feb 9 08:55:21.543872 kubelet[1787]: I0209 08:55:21.543834 1787 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 08:55:21.580825 kubelet[1787]: I0209 08:55:21.580776 1787 reconciler.go:41] "Reconciler: start to sync state" Feb 9 08:55:24.069529 systemd[1]: Reloading. Feb 9 08:55:24.165187 /usr/lib/systemd/system-generators/torcx-generator[2111]: time="2024-02-09T08:55:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 08:55:24.165777 /usr/lib/systemd/system-generators/torcx-generator[2111]: time="2024-02-09T08:55:24Z" level=info msg="torcx already run" Feb 9 08:55:24.267832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 08:55:24.268064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 08:55:24.286978 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 08:55:24.388102 systemd[1]: Stopping kubelet.service... Feb 9 08:55:24.404471 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 08:55:24.405029 systemd[1]: Stopped kubelet.service. Feb 9 08:55:24.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:24.406225 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 9 08:55:24.406319 kernel: audit: type=1131 audit(1707468924.403:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:24.408942 systemd[1]: Started kubelet.service. Feb 9 08:55:24.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:24.418593 kernel: audit: type=1130 audit(1707468924.410:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:24.523924 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
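The dns.go:156 "Nameserver limits exceeded" records repeated throughout this excerpt are the kubelet noting that the host resolv.conf lists more nameservers than it will pass through to pods, so only the first three are applied. A minimal Python sketch of that check, assuming the conventional three-nameserver cap and a hypothetical fourth resolv.conf entry to trigger the warning:

# Illustrative sketch of the dns.go:156 "Nameserver limits exceeded" check;
# the three-nameserver cap is the conventional resolver limit assumed here,
# not code taken from the kubelet.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text):
    """Return the nameservers that would be applied and an optional warning."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) <= MAX_NAMESERVERS:
        return servers, None
    kept = servers[:MAX_NAMESERVERS]
    warning = (
        "Nameserver limits were exceeded, some nameservers have been omitted, "
        "the applied nameserver line is: " + " ".join(kept)
    )
    return kept, warning

if __name__ == "__main__":
    sample = (
        "nameserver 67.207.67.3\n"
        "nameserver 67.207.67.2\n"
        "nameserver 67.207.67.3\n"
        "nameserver 192.0.2.1\n"  # hypothetical fourth entry, not from the log
    )
    kept, warning = applied_nameservers(sample)
    print(kept)     # ['67.207.67.3', '67.207.67.2', '67.207.67.3']
    print(warning)
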
Feb 9 08:55:24.524396 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 08:55:24.524616 kubelet[2165]: I0209 08:55:24.524581 2165 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 08:55:24.526252 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 08:55:24.526423 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 08:55:24.530310 kubelet[2165]: I0209 08:55:24.530264 2165 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 08:55:24.530310 kubelet[2165]: I0209 08:55:24.530298 2165 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 08:55:24.530582 kubelet[2165]: I0209 08:55:24.530547 2165 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 08:55:24.533197 kubelet[2165]: I0209 08:55:24.533124 2165 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 08:55:24.539092 kubelet[2165]: I0209 08:55:24.539014 2165 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 08:55:24.539408 kubelet[2165]: I0209 08:55:24.539189 2165 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 08:55:24.539738 kubelet[2165]: I0209 08:55:24.539720 2165 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 08:55:24.539828 kubelet[2165]: I0209 08:55:24.539810 2165 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 08:55:24.540065 kubelet[2165]: I0209 08:55:24.539847 2165 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 08:55:24.540065 kubelet[2165]: I0209 08:55:24.539862 2165 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 08:55:24.540065 kubelet[2165]: I0209 08:55:24.539900 2165 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:55:24.544505 kubelet[2165]: I0209 08:55:24.544419 2165 kubelet.go:398] "Attempting to sync node with API server" Feb 9 08:55:24.544505 kubelet[2165]: I0209 08:55:24.544466 2165 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 08:55:24.544505 kubelet[2165]: I0209 08:55:24.544493 2165 kubelet.go:297] "Adding apiserver pod source" Feb 9 08:55:24.544505 kubelet[2165]: I0209 08:55:24.544510 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 08:55:24.550302 kubelet[2165]: I0209 08:55:24.550275 2165 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 08:55:24.551635 kubelet[2165]: I0209 08:55:24.551612 2165 server.go:1186] "Started kubelet" Feb 9 08:55:24.564590 kernel: audit: type=1400 audit(1707468924.556:238): avc: denied { mac_admin } for pid=2165 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:24.564895 kernel: audit: type=1401 audit(1707468924.556:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:24.556000 audit[2165]: AVC avc: denied { mac_admin } for pid=2165 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:24.556000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:24.556000 
audit[2165]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a992f0 a1=c000ab7458 a2=c000a992c0 a3=25 items=0 ppid=1 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:24.556000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:24.574744 kernel: audit: type=1300 audit(1707468924.556:238): arch=c000003e syscall=188 success=no exit=-22 a0=c000a992f0 a1=c000ab7458 a2=c000a992c0 a3=25 items=0 ppid=1 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:24.574869 kernel: audit: type=1327 audit(1707468924.556:238): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:24.574899 kubelet[2165]: I0209 08:55:24.574821 2165 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 08:55:24.575811 kubelet[2165]: I0209 08:55:24.575703 2165 server.go:451] "Adding debug handlers to kubelet server" Feb 9 08:55:24.576000 audit[2165]: AVC avc: denied { mac_admin } for pid=2165 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:24.580463 kubelet[2165]: I0209 08:55:24.577977 2165 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 08:55:24.580463 kubelet[2165]: E0209 08:55:24.578290 2165 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 08:55:24.580463 kubelet[2165]: E0209 08:55:24.578331 2165 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 08:55:24.583789 kernel: audit: type=1400 audit(1707468924.576:239): avc: denied { mac_admin } for pid=2165 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:24.583947 kubelet[2165]: I0209 08:55:24.583664 2165 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 08:55:24.583947 kubelet[2165]: I0209 08:55:24.583835 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 08:55:24.592604 kernel: audit: type=1401 audit(1707468924.576:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:24.592879 kernel: audit: type=1300 audit(1707468924.576:239): arch=c000003e syscall=188 success=no exit=-22 a0=c000de2f20 a1=c000ab76c8 a2=c000ec25a0 a3=25 items=0 ppid=1 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:24.576000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:24.576000 audit[2165]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000de2f20 a1=c000ab76c8 a2=c000ec25a0 a3=25 items=0 ppid=1 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:24.593336 kubelet[2165]: I0209 08:55:24.593307 2165 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 08:55:24.576000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:24.599504 kubelet[2165]: I0209 08:55:24.594955 2165 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 08:55:24.599588 kernel: audit: type=1327 audit(1707468924.576:239): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:24.683185 kubelet[2165]: I0209 08:55:24.680664 2165 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 08:55:24.708105 kubelet[2165]: I0209 08:55:24.707887 2165 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:24.726877 kubelet[2165]: I0209 08:55:24.726715 2165 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:24.727706 kubelet[2165]: I0209 08:55:24.727679 2165 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:24.833423 kubelet[2165]: I0209 08:55:24.833393 2165 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 08:55:24.835377 kubelet[2165]: I0209 08:55:24.835318 2165 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 08:55:24.835532 kubelet[2165]: I0209 08:55:24.835518 2165 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 08:55:24.837988 kubelet[2165]: E0209 08:55:24.835746 2165 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 08:55:24.902009 kubelet[2165]: I0209 08:55:24.901981 2165 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 08:55:24.902215 kubelet[2165]: I0209 08:55:24.902202 2165 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 08:55:24.902295 kubelet[2165]: I0209 08:55:24.902285 2165 state_mem.go:36] "Initialized new in-memory state store" Feb 9 08:55:24.903073 kubelet[2165]: I0209 08:55:24.903052 2165 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 08:55:24.903238 kubelet[2165]: I0209 08:55:24.903223 2165 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 08:55:24.903320 kubelet[2165]: I0209 08:55:24.903310 2165 policy_none.go:49] "None policy: Start" Feb 9 08:55:24.904131 kubelet[2165]: I0209 08:55:24.904112 2165 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 08:55:24.904351 kubelet[2165]: I0209 08:55:24.904332 2165 state_mem.go:35] "Initializing new in-memory state store" Feb 9 08:55:24.905163 kubelet[2165]: I0209 08:55:24.905143 2165 state_mem.go:75] "Updated machine memory state" Feb 9 08:55:24.907697 kubelet[2165]: I0209 08:55:24.907661 2165 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 08:55:24.906000 audit[2165]: AVC avc: denied { mac_admin } for pid=2165 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:55:24.906000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 08:55:24.906000 audit[2165]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001542510 a1=c00102df38 a2=c0015424e0 a3=25 items=0 ppid=1 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:24.906000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 08:55:24.910488 kubelet[2165]: I0209 08:55:24.907747 2165 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 08:55:24.911707 kubelet[2165]: I0209 08:55:24.911672 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 08:55:24.942752 kubelet[2165]: I0209 08:55:24.942621 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:24.943395 kubelet[2165]: I0209 08:55:24.942731 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:24.943510 kubelet[2165]: I0209 08:55:24.943452 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:25.013840 kubelet[2165]: I0209 08:55:25.013802 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014018 kubelet[2165]: I0209 08:55:25.013856 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014018 kubelet[2165]: I0209 08:55:25.013886 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014018 kubelet[2165]: I0209 08:55:25.013945 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014018 kubelet[2165]: I0209 08:55:25.013995 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/321d96fe837fb20a67154c8af8e2c6bb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-6-9c47918d0b\" (UID: \"321d96fe837fb20a67154c8af8e2c6bb\") " pod="kube-system/kube-scheduler-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014150 kubelet[2165]: I0209 08:55:25.014022 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/517e1392fe4f0f6bdbee037e3e602ab2-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" (UID: \"517e1392fe4f0f6bdbee037e3e602ab2\") " pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014150 kubelet[2165]: I0209 08:55:25.014045 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014150 kubelet[2165]: I0209 08:55:25.014077 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.014150 kubelet[2165]: I0209 08:55:25.014103 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9625b35f31936332fe19372e406be734-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" (UID: \"9625b35f31936332fe19372e406be734\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.265255 kubelet[2165]: E0209 08:55:25.265206 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:25.266596 kubelet[2165]: E0209 08:55:25.266550 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:25.272626 kubelet[2165]: E0209 08:55:25.272593 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:25.550862 kubelet[2165]: I0209 08:55:25.550718 2165 apiserver.go:52] "Watching apiserver" Feb 9 08:55:25.595511 kubelet[2165]: I0209 08:55:25.595453 2165 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 08:55:25.617933 kubelet[2165]: I0209 08:55:25.617881 2165 reconciler.go:41] "Reconciler: start to sync state" Feb 9 08:55:25.886702 kubelet[2165]: E0209 08:55:25.886580 2165 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-6-9c47918d0b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.887338 kubelet[2165]: E0209 08:55:25.887313 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:25.953370 kubelet[2165]: E0209 08:55:25.953320 2165 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-6-9c47918d0b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:25.953989 kubelet[2165]: E0209 08:55:25.953973 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:26.153444 kubelet[2165]: E0209 08:55:26.153314 2165 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-6-9c47918d0b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" Feb 9 08:55:26.154036 kubelet[2165]: E0209 08:55:26.153999 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Feb 9 08:55:26.881292 kubelet[2165]: E0209 08:55:26.881261 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:26.882241 kubelet[2165]: E0209 08:55:26.882149 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:26.882826 kubelet[2165]: E0209 08:55:26.882797 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:26.966723 kubelet[2165]: I0209 08:55:26.966683 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" podStartSLOduration=2.9656832250000003 pod.CreationTimestamp="2024-02-09 08:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:26.65577802 +0000 UTC m=+2.234333220" watchObservedRunningTime="2024-02-09 08:55:26.965683225 +0000 UTC m=+2.544238439" Feb 9 08:55:27.363131 kubelet[2165]: I0209 08:55:27.363090 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-6-9c47918d0b" podStartSLOduration=3.363033929 pod.CreationTimestamp="2024-02-09 08:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:26.967119682 +0000 UTC m=+2.545674909" watchObservedRunningTime="2024-02-09 08:55:27.363033929 +0000 UTC m=+2.941589128" Feb 9 08:55:27.881844 kubelet[2165]: E0209 08:55:27.881799 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:28.466525 kubelet[2165]: E0209 08:55:28.466484 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:28.483523 kubelet[2165]: I0209 08:55:28.483485 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-6-9c47918d0b" podStartSLOduration=4.48343912 pod.CreationTimestamp="2024-02-09 08:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:27.363807866 +0000 UTC m=+2.942363080" watchObservedRunningTime="2024-02-09 08:55:28.48343912 +0000 UTC m=+4.061994336" Feb 9 08:55:28.885580 kubelet[2165]: E0209 08:55:28.885522 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:29.263837 sudo[1368]: pam_unix(sudo:session): session closed for user root Feb 9 08:55:29.262000 audit[1368]: USER_END pid=1368 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 08:55:29.263000 audit[1368]: CRED_DISP pid=1368 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 08:55:29.272906 sshd[1362]: pam_unix(sshd:session): session closed for user core Feb 9 08:55:29.273000 audit[1362]: USER_END pid=1362 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:55:29.273000 audit[1362]: CRED_DISP pid=1362 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:55:29.279782 systemd-logind[1182]: Session 7 logged out. Waiting for processes to exit. Feb 9 08:55:29.280605 systemd[1]: sshd@6-143.198.159.117:22-139.178.89.65:38534.service: Deactivated successfully. Feb 9 08:55:29.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.159.117:22-139.178.89.65:38534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:55:29.282069 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 08:55:29.283242 systemd-logind[1182]: Removed session 7. Feb 9 08:55:35.500399 update_engine[1183]: I0209 08:55:35.499987 1183 update_attempter.cc:509] Updating boot flags... Feb 9 08:55:35.709165 kubelet[2165]: E0209 08:55:35.708765 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:35.897295 kubelet[2165]: E0209 08:55:35.896924 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:36.241832 kubelet[2165]: E0209 08:55:36.241780 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:36.897850 kubelet[2165]: E0209 08:55:36.897818 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:37.812108 kubelet[2165]: I0209 08:55:37.812074 2165 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 08:55:37.812606 env[1192]: time="2024-02-09T08:55:37.812550707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
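Several kubelet records around this point (pod_startup_latency_tracker.go:102) report pod startup against the SLO as key=value pairs embedded in the quoted message. A small Python sketch, purely for illustration, of pulling the pod name and podStartSLOduration out of such journal lines:

# Illustrative sketch: extract pod name and startup-SLO duration from the
# kubelet "Observed pod startup duration" journal records in this excerpt.
import re

PATTERN = re.compile(r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[0-9.]+)')

def startup_durations(journal_text):
    """Yield (pod, seconds) for each 'Observed pod startup duration' record."""
    for line in journal_text.splitlines():
        if "Observed pod startup duration" not in line:
            continue
        match = PATTERN.search(line)
        if match:
            yield match.group("pod"), float(match.group("slo"))

if __name__ == "__main__":
    sample = (
        'I0209 08:55:26.966683 2165 pod_startup_latency_tracker.go:102] '
        '"Observed pod startup duration" '
        'pod="kube-system/kube-apiserver-ci-3510.3.2-6-9c47918d0b" '
        'podStartSLOduration=2.9656832250000003'
    )
    for pod, seconds in startup_durations(sample):
        print(pod, seconds)
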
Feb 9 08:55:37.813331 kubelet[2165]: I0209 08:55:37.813304 2165 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 08:55:37.829152 kubelet[2165]: I0209 08:55:37.829099 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:37.841337 kubelet[2165]: W0209 08:55:37.841287 2165 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-6-9c47918d0b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-6-9c47918d0b' and this object Feb 9 08:55:37.841337 kubelet[2165]: E0209 08:55:37.841346 2165 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-6-9c47918d0b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-6-9c47918d0b' and this object Feb 9 08:55:37.843793 kubelet[2165]: W0209 08:55:37.843755 2165 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-6-9c47918d0b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-6-9c47918d0b' and this object Feb 9 08:55:37.843793 kubelet[2165]: E0209 08:55:37.843796 2165 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-6-9c47918d0b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-6-9c47918d0b' and this object Feb 9 08:55:37.892818 kubelet[2165]: I0209 08:55:37.892764 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f11a999a-b084-44d0-b281-ad8c0c74942b-lib-modules\") pod \"kube-proxy-bk2z6\" (UID: \"f11a999a-b084-44d0-b281-ad8c0c74942b\") " pod="kube-system/kube-proxy-bk2z6" Feb 9 08:55:37.893013 kubelet[2165]: I0209 08:55:37.892850 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f11a999a-b084-44d0-b281-ad8c0c74942b-kube-proxy\") pod \"kube-proxy-bk2z6\" (UID: \"f11a999a-b084-44d0-b281-ad8c0c74942b\") " pod="kube-system/kube-proxy-bk2z6" Feb 9 08:55:37.893013 kubelet[2165]: I0209 08:55:37.892878 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f11a999a-b084-44d0-b281-ad8c0c74942b-xtables-lock\") pod \"kube-proxy-bk2z6\" (UID: \"f11a999a-b084-44d0-b281-ad8c0c74942b\") " pod="kube-system/kube-proxy-bk2z6" Feb 9 08:55:37.893013 kubelet[2165]: I0209 08:55:37.892902 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6sqj\" (UniqueName: \"kubernetes.io/projected/f11a999a-b084-44d0-b281-ad8c0c74942b-kube-api-access-m6sqj\") pod \"kube-proxy-bk2z6\" (UID: \"f11a999a-b084-44d0-b281-ad8c0c74942b\") " pod="kube-system/kube-proxy-bk2z6" Feb 9 08:55:38.751836 kubelet[2165]: I0209 08:55:38.751790 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 
08:55:38.798649 kubelet[2165]: I0209 08:55:38.798595 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfc980be-fb38-4942-b799-b086c7387b9b-var-lib-calico\") pod \"tigera-operator-cfc98749c-7rvzr\" (UID: \"cfc980be-fb38-4942-b799-b086c7387b9b\") " pod="tigera-operator/tigera-operator-cfc98749c-7rvzr" Feb 9 08:55:38.798649 kubelet[2165]: I0209 08:55:38.798671 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wgq\" (UniqueName: \"kubernetes.io/projected/cfc980be-fb38-4942-b799-b086c7387b9b-kube-api-access-m9wgq\") pod \"tigera-operator-cfc98749c-7rvzr\" (UID: \"cfc980be-fb38-4942-b799-b086c7387b9b\") " pod="tigera-operator/tigera-operator-cfc98749c-7rvzr" Feb 9 08:55:39.009981 kubelet[2165]: E0209 08:55:39.009523 2165 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 08:55:39.010178 kubelet[2165]: E0209 08:55:39.010159 2165 projected.go:198] Error preparing data for projected volume kube-api-access-m6sqj for pod kube-system/kube-proxy-bk2z6: failed to sync configmap cache: timed out waiting for the condition Feb 9 08:55:39.010373 kubelet[2165]: E0209 08:55:39.010359 2165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f11a999a-b084-44d0-b281-ad8c0c74942b-kube-api-access-m6sqj podName:f11a999a-b084-44d0-b281-ad8c0c74942b nodeName:}" failed. No retries permitted until 2024-02-09 08:55:39.510330126 +0000 UTC m=+15.088885336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m6sqj" (UniqueName: "kubernetes.io/projected/f11a999a-b084-44d0-b281-ad8c0c74942b-kube-api-access-m6sqj") pod "kube-proxy-bk2z6" (UID: "f11a999a-b084-44d0-b281-ad8c0c74942b") : failed to sync configmap cache: timed out waiting for the condition Feb 9 08:55:39.056324 env[1192]: time="2024-02-09T08:55:39.056186123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-7rvzr,Uid:cfc980be-fb38-4942-b799-b086c7387b9b,Namespace:tigera-operator,Attempt:0,}" Feb 9 08:55:39.083259 env[1192]: time="2024-02-09T08:55:39.083151498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:39.083259 env[1192]: time="2024-02-09T08:55:39.083195286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:39.083259 env[1192]: time="2024-02-09T08:55:39.083210549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:39.083712 env[1192]: time="2024-02-09T08:55:39.083647239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db88a3e199c9576d886dd71811b1193618c1d8856fca8158b8991df2190fbce pid=2284 runtime=io.containerd.runc.v2 Feb 9 08:55:39.165180 env[1192]: time="2024-02-09T08:55:39.165123471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-7rvzr,Uid:cfc980be-fb38-4942-b799-b086c7387b9b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1db88a3e199c9576d886dd71811b1193618c1d8856fca8158b8991df2190fbce\"" Feb 9 08:55:39.169810 env[1192]: time="2024-02-09T08:55:39.169249882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 08:55:39.634072 kubelet[2165]: E0209 08:55:39.634025 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:39.635872 env[1192]: time="2024-02-09T08:55:39.635823737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bk2z6,Uid:f11a999a-b084-44d0-b281-ad8c0c74942b,Namespace:kube-system,Attempt:0,}" Feb 9 08:55:39.655163 env[1192]: time="2024-02-09T08:55:39.655077109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:39.655163 env[1192]: time="2024-02-09T08:55:39.655120542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:39.655396 env[1192]: time="2024-02-09T08:55:39.655139012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:39.655708 env[1192]: time="2024-02-09T08:55:39.655668318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76985d1d3ce8416759f61666a26e2d2454310271d6ddad69a26a1a1318800cc6 pid=2330 runtime=io.containerd.runc.v2 Feb 9 08:55:39.703121 env[1192]: time="2024-02-09T08:55:39.703063126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bk2z6,Uid:f11a999a-b084-44d0-b281-ad8c0c74942b,Namespace:kube-system,Attempt:0,} returns sandbox id \"76985d1d3ce8416759f61666a26e2d2454310271d6ddad69a26a1a1318800cc6\"" Feb 9 08:55:39.703994 kubelet[2165]: E0209 08:55:39.703821 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:39.708768 env[1192]: time="2024-02-09T08:55:39.708722125Z" level=info msg="CreateContainer within sandbox \"76985d1d3ce8416759f61666a26e2d2454310271d6ddad69a26a1a1318800cc6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 08:55:39.736316 env[1192]: time="2024-02-09T08:55:39.736262325Z" level=info msg="CreateContainer within sandbox \"76985d1d3ce8416759f61666a26e2d2454310271d6ddad69a26a1a1318800cc6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"490cf2db55100300d5772920e26b84d643b396b672127609784a74e12ad3229b\"" Feb 9 08:55:39.737291 env[1192]: time="2024-02-09T08:55:39.737243867Z" level=info msg="StartContainer for \"490cf2db55100300d5772920e26b84d643b396b672127609784a74e12ad3229b\"" Feb 9 08:55:39.808892 env[1192]: time="2024-02-09T08:55:39.808832640Z" level=info msg="StartContainer for \"490cf2db55100300d5772920e26b84d643b396b672127609784a74e12ad3229b\" returns successfully" Feb 9 08:55:39.907230 kubelet[2165]: E0209 08:55:39.907082 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:39.922054 systemd[1]: run-containerd-runc-k8s.io-1db88a3e199c9576d886dd71811b1193618c1d8856fca8158b8991df2190fbce-runc.TYf3Bs.mount: Deactivated successfully. 
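The audit records in this log carry the invoking command line in the proctitle= field, hex-encoded with NUL bytes separating the arguments. A short Python sketch (illustration only) that decodes the proctitle from the first NETFILTER_CFG record below back into the iptables invocation behind the KUBE-PROXY-CANARY chain:

# Illustrative sketch: audit PROCTITLE values are the process argv,
# hex-encoded with NUL separators between arguments.
def decode_proctitle(hex_proctitle):
    """Turn an audit proctitle= hex string back into a list of argv strings."""
    return [arg.decode() for arg in bytes.fromhex(hex_proctitle).split(b"\x00")]

if __name__ == "__main__":
    # proctitle= value from the NETFILTER_CFG record for the KUBE-PROXY-CANARY chain
    sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    print(decode_proctitle(sample))
    # ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']
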
Feb 9 08:55:39.929905 kubelet[2165]: I0209 08:55:39.929861 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bk2z6" podStartSLOduration=2.929818075 pod.CreationTimestamp="2024-02-09 08:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:39.929064738 +0000 UTC m=+15.507619958" watchObservedRunningTime="2024-02-09 08:55:39.929818075 +0000 UTC m=+15.508373293" Feb 9 08:55:40.039000 audit[2419]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.043072 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 08:55:40.043194 kernel: audit: type=1325 audit(1707468940.039:246): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.039000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1fbe23c0 a2=0 a3=7ffd1fbe23ac items=0 ppid=2381 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 08:55:40.057322 kernel: audit: type=1300 audit(1707468940.039:246): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1fbe23c0 a2=0 a3=7ffd1fbe23ac items=0 ppid=2381 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.057472 kernel: audit: type=1327 audit(1707468940.039:246): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 08:55:40.045000 audit[2420]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.064812 kernel: audit: type=1325 audit(1707468940.045:247): table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.064962 kernel: audit: type=1300 audit(1707468940.045:247): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2e2516b0 a2=0 a3=7fff2e25169c items=0 ppid=2381 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.045000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2e2516b0 a2=0 a3=7fff2e25169c items=0 ppid=2381 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 08:55:40.077608 kernel: audit: type=1327 audit(1707468940.045:247): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 08:55:40.045000 audit[2421]: NETFILTER_CFG table=nat:61 
family=10 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.082595 kernel: audit: type=1325 audit(1707468940.045:248): table=nat:61 family=10 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.045000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffacf1380 a2=0 a3=7ffffacf136c items=0 ppid=2381 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 08:55:40.093415 kernel: audit: type=1300 audit(1707468940.045:248): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffacf1380 a2=0 a3=7ffffacf136c items=0 ppid=2381 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.093654 kernel: audit: type=1327 audit(1707468940.045:248): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 08:55:40.093706 kernel: audit: type=1325 audit(1707468940.045:249): table=filter:62 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.045000 audit[2422]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.045000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2c0896b0 a2=0 a3=7ffe2c08969c items=0 ppid=2381 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 08:55:40.081000 audit[2423]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.081000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaadc4d30 a2=0 a3=7ffeaadc4d1c items=0 ppid=2381 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 08:55:40.089000 audit[2424]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.089000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff488d1cb0 a2=0 a3=7fff488d1c9c items=0 ppid=2381 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.089000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 08:55:40.162000 audit[2425]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.162000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffd5061960 a2=0 a3=7fffd506194c items=0 ppid=2381 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 08:55:40.167000 audit[2427]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.167000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff73c7b160 a2=0 a3=7fff73c7b14c items=0 ppid=2381 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 08:55:40.179000 audit[2430]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.179000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe28cfa4d0 a2=0 a3=7ffe28cfa4bc items=0 ppid=2381 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 08:55:40.181000 audit[2431]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.181000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd47386420 a2=0 a3=7ffd4738640c items=0 ppid=2381 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 08:55:40.185000 audit[2433]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.185000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcec6fcf80 a2=0 a3=7ffcec6fcf6c items=0 ppid=2381 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 08:55:40.186000 audit[2434]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.186000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9df03a10 a2=0 a3=7fff9df039fc items=0 ppid=2381 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 08:55:40.189000 audit[2436]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.189000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd80022700 a2=0 a3=7ffd800226ec items=0 ppid=2381 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 08:55:40.194000 audit[2439]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.194000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc743c6160 a2=0 a3=7ffc743c614c items=0 ppid=2381 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 08:55:40.195000 audit[2440]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.195000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcebf41d70 a2=0 a3=7ffcebf41d5c items=0 ppid=2381 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 08:55:40.199000 audit[2442]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.199000 
audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc72a6e40 a2=0 a3=7ffdc72a6e2c items=0 ppid=2381 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 08:55:40.201000 audit[2443]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.201000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc73d80660 a2=0 a3=7ffc73d8064c items=0 ppid=2381 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 08:55:40.206000 audit[2445]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.206000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0cec7020 a2=0 a3=7ffe0cec700c items=0 ppid=2381 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 08:55:40.211000 audit[2448]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.211000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd25823040 a2=0 a3=7ffd2582302c items=0 ppid=2381 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 08:55:40.219000 audit[2451]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.219000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5306fcd0 a2=0 a3=7ffd5306fcbc items=0 ppid=2381 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.219000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 08:55:40.221000 audit[2452]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.221000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff51714520 a2=0 a3=7fff5171450c items=0 ppid=2381 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 08:55:40.226000 audit[2454]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.226000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffeb2efa350 a2=0 a3=7ffeb2efa33c items=0 ppid=2381 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 08:55:40.231000 audit[2457]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 08:55:40.231000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd903ac0b0 a2=0 a3=7ffd903ac09c items=0 ppid=2381 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 08:55:40.250000 audit[2461]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:40.250000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe64580ce0 a2=0 a3=7ffe64580ccc items=0 ppid=2381 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:40.258000 audit[2461]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:40.258000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe64580ce0 a2=0 a3=7ffe64580ccc items=0 ppid=2381 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:40.264000 audit[2466]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.264000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebbdf2910 a2=0 a3=7ffebbdf28fc items=0 ppid=2381 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 08:55:40.269000 audit[2468]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.269000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff97fa0820 a2=0 a3=7fff97fa080c items=0 ppid=2381 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 08:55:40.272000 audit[2471]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.272000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd1e38a7b0 a2=0 a3=7ffd1e38a79c items=0 ppid=2381 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 08:55:40.274000 audit[2472]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.274000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee64fc710 a2=0 a3=7ffee64fc6fc items=0 ppid=2381 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 08:55:40.277000 audit[2474]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.277000 audit[2474]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffdc87aded0 a2=0 a3=7ffdc87adebc items=0 ppid=2381 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 08:55:40.279000 audit[2475]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.279000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb6461bf0 a2=0 a3=7ffdb6461bdc items=0 ppid=2381 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 08:55:40.283000 audit[2477]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.283000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcc40a5ed0 a2=0 a3=7ffcc40a5ebc items=0 ppid=2381 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 08:55:40.289000 audit[2480]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.289000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd1e248440 a2=0 a3=7ffd1e24842c items=0 ppid=2381 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 08:55:40.290000 audit[2481]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.290000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc79ad740 a2=0 a3=7ffdc79ad72c items=0 ppid=2381 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.290000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 08:55:40.294000 audit[2483]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.294000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe6a1ef780 a2=0 a3=7ffe6a1ef76c items=0 ppid=2381 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 08:55:40.295000 audit[2484]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.295000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc80265ea0 a2=0 a3=7ffc80265e8c items=0 ppid=2381 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 08:55:40.298000 audit[2486]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.298000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd5a0f6a0 a2=0 a3=7fffd5a0f68c items=0 ppid=2381 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 08:55:40.305000 audit[2489]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.305000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffff5ce400 a2=0 a3=7fffff5ce3ec items=0 ppid=2381 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 08:55:40.309000 audit[2492]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.309000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3b473dd0 a2=0 a3=7ffd3b473dbc items=0 ppid=2381 
pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 08:55:40.311000 audit[2493]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.311000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7e1e4090 a2=0 a3=7ffc7e1e407c items=0 ppid=2381 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 08:55:40.314000 audit[2495]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.314000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdceae3fb0 a2=0 a3=7ffdceae3f9c items=0 ppid=2381 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 08:55:40.318000 audit[2498]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 08:55:40.318000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff7d488dc0 a2=0 a3=7fff7d488dac items=0 ppid=2381 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 08:55:40.328000 audit[2502]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 08:55:40.328000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff4576eef0 a2=0 a3=7fff4576eedc items=0 ppid=2381 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.328000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:40.329000 audit[2502]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2502 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 08:55:40.329000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fff4576eef0 a2=0 a3=7fff4576eedc items=0 ppid=2381 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:40.329000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:40.440638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11877244.mount: Deactivated successfully. Feb 9 08:55:40.911092 kubelet[2165]: E0209 08:55:40.911002 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:41.634164 env[1192]: time="2024-02-09T08:55:41.634093053Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:41.636995 env[1192]: time="2024-02-09T08:55:41.636941533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:41.640909 env[1192]: time="2024-02-09T08:55:41.640851575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:41.645204 env[1192]: time="2024-02-09T08:55:41.645141091Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:41.646122 env[1192]: time="2024-02-09T08:55:41.646072475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 08:55:41.651039 env[1192]: time="2024-02-09T08:55:41.650969882Z" level=info msg="CreateContainer within sandbox \"1db88a3e199c9576d886dd71811b1193618c1d8856fca8158b8991df2190fbce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 08:55:41.666112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678974608.mount: Deactivated successfully. 
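The audit records above capture each xtables-nft-multi invocation kube-proxy makes while it registers its canary, external-services, nodeport, services, forward and proxy-firewall chains, first for IPv4 (family=2) and then for IPv6 (family=10). The PROCTITLE field in every record is the invoking process's argv, hex-encoded with NUL separators, so the original command line can be recovered directly from the log. The short Python sketch below decodes one of the values quoted above; decode_proctitle is an illustrative helper name, not part of kube-proxy or auditd tooling.

    # Minimal sketch: decode an audit PROCTITLE field (hex-encoded argv,
    # NUL-separated) back into the command line it records.
    def decode_proctitle(hex_argv: str) -> str:
        raw = bytes.fromhex(hex_argv)
        # argv elements are separated by NUL bytes in the audit record
        return " ".join(part.decode("ascii", errors="replace")
                        for part in raw.split(b"\x00") if part)

    if __name__ == "__main__":
        # Sample copied verbatim from the first NETFILTER_CFG entry above
        sample = ("69707461626C6573002D770035002D5700313030303030"
                  "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
        print(decode_proctitle(sample))
        # prints: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Applied to the later entries, the same decoding yields the matching ip6tables commands, and the iptables-restor / ip6tables-resto records decode to iptables-restore -w 5 -W 100000 --noflush --counters.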
Feb 9 08:55:41.675607 env[1192]: time="2024-02-09T08:55:41.675531734Z" level=info msg="CreateContainer within sandbox \"1db88a3e199c9576d886dd71811b1193618c1d8856fca8158b8991df2190fbce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"507b53e3fdb3d33575a8b2fb9f4e434d5662009a399403fb06df64b0364bda0b\"" Feb 9 08:55:41.677901 env[1192]: time="2024-02-09T08:55:41.677862132Z" level=info msg="StartContainer for \"507b53e3fdb3d33575a8b2fb9f4e434d5662009a399403fb06df64b0364bda0b\"" Feb 9 08:55:41.758423 env[1192]: time="2024-02-09T08:55:41.758372523Z" level=info msg="StartContainer for \"507b53e3fdb3d33575a8b2fb9f4e434d5662009a399403fb06df64b0364bda0b\" returns successfully" Feb 9 08:55:41.926672 kubelet[2165]: I0209 08:55:41.926530 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-7rvzr" podStartSLOduration=-9.223372032928293e+09 pod.CreationTimestamp="2024-02-09 08:55:38 +0000 UTC" firstStartedPulling="2024-02-09 08:55:39.167293966 +0000 UTC m=+14.745849167" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:41.92633457 +0000 UTC m=+17.504889788" watchObservedRunningTime="2024-02-09 08:55:41.926482264 +0000 UTC m=+17.505037488" Feb 9 08:55:43.856000 audit[2567]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:43.856000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffedfbc5f10 a2=0 a3=7ffedfbc5efc items=0 ppid=2381 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:43.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:43.856000 audit[2567]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:43.856000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffedfbc5f10 a2=0 a3=7ffedfbc5efc items=0 ppid=2381 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:43.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:43.920000 audit[2593]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:43.920000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd56cd4ff0 a2=0 a3=7ffd56cd4fdc items=0 ppid=2381 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:43.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:43.933000 audit[2593]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:43.933000 audit[2593]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd56cd4ff0 a2=0 a3=7ffd56cd4fdc items=0 ppid=2381 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:43.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:43.951780 kubelet[2165]: I0209 08:55:43.951731 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:44.029108 kubelet[2165]: I0209 08:55:44.029070 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd28t\" (UniqueName: \"kubernetes.io/projected/bda6b8ce-d0d0-4c91-a727-6efedb4d66ed-kube-api-access-pd28t\") pod \"calico-typha-56d879d77b-pgjhk\" (UID: \"bda6b8ce-d0d0-4c91-a727-6efedb4d66ed\") " pod="calico-system/calico-typha-56d879d77b-pgjhk" Feb 9 08:55:44.029341 kubelet[2165]: I0209 08:55:44.029327 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda6b8ce-d0d0-4c91-a727-6efedb4d66ed-tigera-ca-bundle\") pod \"calico-typha-56d879d77b-pgjhk\" (UID: \"bda6b8ce-d0d0-4c91-a727-6efedb4d66ed\") " pod="calico-system/calico-typha-56d879d77b-pgjhk" Feb 9 08:55:44.029463 kubelet[2165]: I0209 08:55:44.029452 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bda6b8ce-d0d0-4c91-a727-6efedb4d66ed-typha-certs\") pod \"calico-typha-56d879d77b-pgjhk\" (UID: \"bda6b8ce-d0d0-4c91-a727-6efedb4d66ed\") " pod="calico-system/calico-typha-56d879d77b-pgjhk" Feb 9 08:55:44.036076 kubelet[2165]: I0209 08:55:44.036022 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:44.130638 kubelet[2165]: I0209 08:55:44.130487 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-cni-net-dir\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130638 kubelet[2165]: I0209 08:55:44.130541 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-policysync\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130638 kubelet[2165]: I0209 08:55:44.130604 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-lib-modules\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130638 kubelet[2165]: I0209 08:55:44.130631 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-cni-bin-dir\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130961 kubelet[2165]: I0209 08:55:44.130653 2165 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qhx\" (UniqueName: \"kubernetes.io/projected/58e67f8c-ff68-439b-917e-7d64a9374bdb-kube-api-access-87qhx\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130961 kubelet[2165]: I0209 08:55:44.130677 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-var-run-calico\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130961 kubelet[2165]: I0209 08:55:44.130699 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-xtables-lock\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130961 kubelet[2165]: I0209 08:55:44.130741 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-cni-log-dir\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.130961 kubelet[2165]: I0209 08:55:44.130776 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e67f8c-ff68-439b-917e-7d64a9374bdb-tigera-ca-bundle\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.131191 kubelet[2165]: I0209 08:55:44.130807 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-flexvol-driver-host\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.131191 kubelet[2165]: I0209 08:55:44.130832 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58e67f8c-ff68-439b-917e-7d64a9374bdb-node-certs\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.131191 kubelet[2165]: I0209 08:55:44.130870 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58e67f8c-ff68-439b-917e-7d64a9374bdb-var-lib-calico\") pod \"calico-node-2cxj6\" (UID: \"58e67f8c-ff68-439b-917e-7d64a9374bdb\") " pod="calico-system/calico-node-2cxj6" Feb 9 08:55:44.173688 kubelet[2165]: I0209 08:55:44.173630 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:55:44.173959 kubelet[2165]: E0209 08:55:44.173926 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:44.231171 kubelet[2165]: I0209 
08:55:44.231118 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a86e9fee-b3a3-441d-8e06-482d03abae6a-kubelet-dir\") pod \"csi-node-driver-kk2hr\" (UID: \"a86e9fee-b3a3-441d-8e06-482d03abae6a\") " pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:55:44.231346 kubelet[2165]: I0209 08:55:44.231215 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a86e9fee-b3a3-441d-8e06-482d03abae6a-varrun\") pod \"csi-node-driver-kk2hr\" (UID: \"a86e9fee-b3a3-441d-8e06-482d03abae6a\") " pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:55:44.231346 kubelet[2165]: I0209 08:55:44.231289 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a86e9fee-b3a3-441d-8e06-482d03abae6a-registration-dir\") pod \"csi-node-driver-kk2hr\" (UID: \"a86e9fee-b3a3-441d-8e06-482d03abae6a\") " pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:55:44.231346 kubelet[2165]: I0209 08:55:44.231334 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a86e9fee-b3a3-441d-8e06-482d03abae6a-socket-dir\") pod \"csi-node-driver-kk2hr\" (UID: \"a86e9fee-b3a3-441d-8e06-482d03abae6a\") " pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:55:44.231480 kubelet[2165]: I0209 08:55:44.231379 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9hpq\" (UniqueName: \"kubernetes.io/projected/a86e9fee-b3a3-441d-8e06-482d03abae6a-kube-api-access-g9hpq\") pod \"csi-node-driver-kk2hr\" (UID: \"a86e9fee-b3a3-441d-8e06-482d03abae6a\") " pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:55:44.241504 kubelet[2165]: E0209 08:55:44.241416 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.241504 kubelet[2165]: W0209 08:55:44.241499 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.241778 kubelet[2165]: E0209 08:55:44.241555 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.255652 kubelet[2165]: E0209 08:55:44.255604 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:44.256188 env[1192]: time="2024-02-09T08:55:44.256148956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d879d77b-pgjhk,Uid:bda6b8ce-d0d0-4c91-a727-6efedb4d66ed,Namespace:calico-system,Attempt:0,}" Feb 9 08:55:44.288048 env[1192]: time="2024-02-09T08:55:44.287944818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:44.288401 env[1192]: time="2024-02-09T08:55:44.288305958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:44.288704 env[1192]: time="2024-02-09T08:55:44.288532546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:44.289222 env[1192]: time="2024-02-09T08:55:44.289162900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ed677cb3b371ab5e53aa8eb0d77c236255a8b32932188acc01aba8a2f02d0f8 pid=2605 runtime=io.containerd.runc.v2 Feb 9 08:55:44.332098 kubelet[2165]: E0209 08:55:44.331919 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.332098 kubelet[2165]: W0209 08:55:44.331943 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.332098 kubelet[2165]: E0209 08:55:44.331966 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.332760 kubelet[2165]: E0209 08:55:44.332351 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.332760 kubelet[2165]: W0209 08:55:44.332366 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.332760 kubelet[2165]: E0209 08:55:44.332391 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.332760 kubelet[2165]: E0209 08:55:44.332636 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.332760 kubelet[2165]: W0209 08:55:44.332644 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.332760 kubelet[2165]: E0209 08:55:44.332658 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.333207 kubelet[2165]: E0209 08:55:44.333092 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.333207 kubelet[2165]: W0209 08:55:44.333104 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.333207 kubelet[2165]: E0209 08:55:44.333116 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.333470 kubelet[2165]: E0209 08:55:44.333360 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.333470 kubelet[2165]: W0209 08:55:44.333370 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.333470 kubelet[2165]: E0209 08:55:44.333382 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.333710 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.334300 kubelet[2165]: W0209 08:55:44.333720 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.333732 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.333972 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.334300 kubelet[2165]: W0209 08:55:44.333983 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.333998 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.334154 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.334300 kubelet[2165]: W0209 08:55:44.334165 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.334300 kubelet[2165]: E0209 08:55:44.334180 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.335093 kubelet[2165]: E0209 08:55:44.334650 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.335093 kubelet[2165]: W0209 08:55:44.334659 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.335093 kubelet[2165]: E0209 08:55:44.334671 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.335093 kubelet[2165]: E0209 08:55:44.334895 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.335093 kubelet[2165]: W0209 08:55:44.334901 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.335093 kubelet[2165]: E0209 08:55:44.334911 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.336220 kubelet[2165]: E0209 08:55:44.336050 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.336220 kubelet[2165]: W0209 08:55:44.336063 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.336220 kubelet[2165]: E0209 08:55:44.336078 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.336470 kubelet[2165]: E0209 08:55:44.336393 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.336470 kubelet[2165]: W0209 08:55:44.336402 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.336470 kubelet[2165]: E0209 08:55:44.336414 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.336798 kubelet[2165]: E0209 08:55:44.336786 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.336982 kubelet[2165]: W0209 08:55:44.336857 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.336982 kubelet[2165]: E0209 08:55:44.336875 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.337192 kubelet[2165]: E0209 08:55:44.337178 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.337274 kubelet[2165]: W0209 08:55:44.337259 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.337365 kubelet[2165]: E0209 08:55:44.337352 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.337729 kubelet[2165]: E0209 08:55:44.337715 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.337827 kubelet[2165]: W0209 08:55:44.337814 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.337901 kubelet[2165]: E0209 08:55:44.337890 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.338139 kubelet[2165]: E0209 08:55:44.338128 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.338272 kubelet[2165]: W0209 08:55:44.338254 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.338348 kubelet[2165]: E0209 08:55:44.338338 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.338612 kubelet[2165]: E0209 08:55:44.338544 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.338707 kubelet[2165]: W0209 08:55:44.338693 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.338780 kubelet[2165]: E0209 08:55:44.338770 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.339931 kubelet[2165]: E0209 08:55:44.339913 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.340053 kubelet[2165]: W0209 08:55:44.340036 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.340146 kubelet[2165]: E0209 08:55:44.340132 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.340732 kubelet[2165]: E0209 08:55:44.340700 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.340909 kubelet[2165]: W0209 08:55:44.340882 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.341020 kubelet[2165]: E0209 08:55:44.341009 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.341880 kubelet[2165]: E0209 08:55:44.341858 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.341880 kubelet[2165]: W0209 08:55:44.341876 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.342001 kubelet[2165]: E0209 08:55:44.341908 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.342229 kubelet[2165]: E0209 08:55:44.342215 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.342229 kubelet[2165]: W0209 08:55:44.342227 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.342335 kubelet[2165]: E0209 08:55:44.342245 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.342482 kubelet[2165]: E0209 08:55:44.342453 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.342482 kubelet[2165]: W0209 08:55:44.342468 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.342482 kubelet[2165]: E0209 08:55:44.342484 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.342736 kubelet[2165]: E0209 08:55:44.342722 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.342736 kubelet[2165]: W0209 08:55:44.342733 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.342947 kubelet[2165]: E0209 08:55:44.342929 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.343226 kubelet[2165]: E0209 08:55:44.343178 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.343226 kubelet[2165]: W0209 08:55:44.343191 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.343226 kubelet[2165]: E0209 08:55:44.343204 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.343572 kubelet[2165]: E0209 08:55:44.343544 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.343631 kubelet[2165]: W0209 08:55:44.343578 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.343631 kubelet[2165]: E0209 08:55:44.343591 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.344324 kubelet[2165]: E0209 08:55:44.344296 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.344324 kubelet[2165]: W0209 08:55:44.344320 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.344438 kubelet[2165]: E0209 08:55:44.344335 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.431706 env[1192]: time="2024-02-09T08:55:44.431588094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d879d77b-pgjhk,Uid:bda6b8ce-d0d0-4c91-a727-6efedb4d66ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ed677cb3b371ab5e53aa8eb0d77c236255a8b32932188acc01aba8a2f02d0f8\"" Feb 9 08:55:44.433481 kubelet[2165]: E0209 08:55:44.433384 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:44.434993 kubelet[2165]: E0209 08:55:44.434345 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.434993 kubelet[2165]: W0209 08:55:44.434360 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.434993 kubelet[2165]: E0209 08:55:44.434384 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.434993 kubelet[2165]: E0209 08:55:44.434609 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.434993 kubelet[2165]: W0209 08:55:44.434617 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.434993 kubelet[2165]: E0209 08:55:44.434629 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:44.437254 env[1192]: time="2024-02-09T08:55:44.437201064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 08:55:44.535932 kubelet[2165]: E0209 08:55:44.535757 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.535932 kubelet[2165]: W0209 08:55:44.535779 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.535932 kubelet[2165]: E0209 08:55:44.535815 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.536321 kubelet[2165]: E0209 08:55:44.536242 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.536321 kubelet[2165]: W0209 08:55:44.536254 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.536321 kubelet[2165]: E0209 08:55:44.536292 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.567810 kubelet[2165]: E0209 08:55:44.567779 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.567998 kubelet[2165]: W0209 08:55:44.567980 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.568097 kubelet[2165]: E0209 08:55:44.568086 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.638068 kubelet[2165]: E0209 08:55:44.638025 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.638303 kubelet[2165]: W0209 08:55:44.638282 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.638432 kubelet[2165]: E0209 08:55:44.638417 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.640056 kubelet[2165]: E0209 08:55:44.640026 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:44.640642 env[1192]: time="2024-02-09T08:55:44.640592416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cxj6,Uid:58e67f8c-ff68-439b-917e-7d64a9374bdb,Namespace:calico-system,Attempt:0,}" Feb 9 08:55:44.673985 env[1192]: time="2024-02-09T08:55:44.673870051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:55:44.674295 env[1192]: time="2024-02-09T08:55:44.674254502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:55:44.674423 env[1192]: time="2024-02-09T08:55:44.674399038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:55:44.675631 env[1192]: time="2024-02-09T08:55:44.674780824Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171 pid=2688 runtime=io.containerd.runc.v2 Feb 9 08:55:44.732735 env[1192]: time="2024-02-09T08:55:44.732690810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2cxj6,Uid:58e67f8c-ff68-439b-917e-7d64a9374bdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\"" Feb 9 08:55:44.733954 kubelet[2165]: E0209 08:55:44.733786 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:44.739696 kubelet[2165]: E0209 08:55:44.739664 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.739696 kubelet[2165]: W0209 08:55:44.739687 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.739922 kubelet[2165]: E0209 08:55:44.739711 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:44.766212 kubelet[2165]: E0209 08:55:44.766167 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:44.766212 kubelet[2165]: W0209 08:55:44.766201 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:44.766212 kubelet[2165]: E0209 08:55:44.766227 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:45.088604 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 9 08:55:45.088774 kernel: audit: type=1325 audit(1707468945.082:294): table=filter:107 family=2 entries=14 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:45.082000 audit[2749]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:45.082000 audit[2749]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff8b2ecb50 a2=0 a3=7fff8b2ecb3c items=0 ppid=2381 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:45.095705 kernel: audit: type=1300 audit(1707468945.082:294): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff8b2ecb50 a2=0 a3=7fff8b2ecb3c items=0 ppid=2381 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:45.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:45.101592 kernel: audit: type=1327 audit(1707468945.082:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:45.084000 audit[2749]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:45.084000 audit[2749]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff8b2ecb50 a2=0 a3=7fff8b2ecb3c items=0 ppid=2381 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:45.112464 kernel: audit: type=1325 audit(1707468945.084:295): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:55:45.112635 kernel: audit: type=1300 audit(1707468945.084:295): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff8b2ecb50 a2=0 a3=7fff8b2ecb3c items=0 ppid=2381 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:55:45.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:45.118599 kernel: audit: type=1327 audit(1707468945.084:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:55:45.836738 kubelet[2165]: E0209 08:55:45.836688 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:46.267225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909820549.mount: Deactivated successfully. 
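The audit records above (type=1325/1300/1327) log the iptables-restore run made while programming service rules; the PROCTITLE field is the command line, hex-encoded with NUL-separated argv entries. A minimal decoding sketch in plain Python, with the hex value copied verbatim from the record above:

```python
# Decode the hex-encoded PROCTITLE field of the audit record above.
# Audit encodes the full command line as hex bytes, with NUL separators between argv entries.
hex_proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(part.decode() for part in argv))
# Prints: iptables-restore -w 5 -W 100000 --noflush --counters
```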
Feb 9 08:55:47.836823 kubelet[2165]: E0209 08:55:47.836730 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:48.607955 env[1192]: time="2024-02-09T08:55:48.607889580Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:48.610191 env[1192]: time="2024-02-09T08:55:48.610137880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:48.613736 env[1192]: time="2024-02-09T08:55:48.613608514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:48.622213 env[1192]: time="2024-02-09T08:55:48.622017189Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:48.632240 env[1192]: time="2024-02-09T08:55:48.632161348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 9 08:55:48.636341 env[1192]: time="2024-02-09T08:55:48.634284378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 08:55:48.672605 env[1192]: time="2024-02-09T08:55:48.668106103Z" level=info msg="CreateContainer within sandbox \"2ed677cb3b371ab5e53aa8eb0d77c236255a8b32932188acc01aba8a2f02d0f8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 08:55:48.779108 env[1192]: time="2024-02-09T08:55:48.776228658Z" level=info msg="CreateContainer within sandbox \"2ed677cb3b371ab5e53aa8eb0d77c236255a8b32932188acc01aba8a2f02d0f8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"baa849c605308f7a739dcc9a0449b051a3add9a97986dfbc951f9b94f5803984\"" Feb 9 08:55:48.780500 env[1192]: time="2024-02-09T08:55:48.780441359Z" level=info msg="StartContainer for \"baa849c605308f7a739dcc9a0449b051a3add9a97986dfbc951f9b94f5803984\"" Feb 9 08:55:48.901187 env[1192]: time="2024-02-09T08:55:48.901040874Z" level=info msg="StartContainer for \"baa849c605308f7a739dcc9a0449b051a3add9a97986dfbc951f9b94f5803984\" returns successfully" Feb 9 08:55:48.932540 kubelet[2165]: E0209 08:55:48.932498 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:48.941586 kubelet[2165]: E0209 08:55:48.941539 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.941586 kubelet[2165]: W0209 08:55:48.941581 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.941586 kubelet[2165]: E0209 08:55:48.941604 2165 
plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.942282 kubelet[2165]: E0209 08:55:48.942262 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.942282 kubelet[2165]: W0209 08:55:48.942275 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.942445 kubelet[2165]: E0209 08:55:48.942301 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.942690 kubelet[2165]: E0209 08:55:48.942671 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.942690 kubelet[2165]: W0209 08:55:48.942684 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.942844 kubelet[2165]: E0209 08:55:48.942699 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.943070 kubelet[2165]: E0209 08:55:48.943054 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.943070 kubelet[2165]: W0209 08:55:48.943065 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.943186 kubelet[2165]: E0209 08:55:48.943075 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.943386 kubelet[2165]: E0209 08:55:48.943370 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.943386 kubelet[2165]: W0209 08:55:48.943380 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.943386 kubelet[2165]: E0209 08:55:48.943389 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.943690 kubelet[2165]: E0209 08:55:48.943676 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.943690 kubelet[2165]: W0209 08:55:48.943687 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.943809 kubelet[2165]: E0209 08:55:48.943696 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.944013 kubelet[2165]: E0209 08:55:48.943998 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.944013 kubelet[2165]: W0209 08:55:48.944008 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.944137 kubelet[2165]: E0209 08:55:48.944019 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.945316 kubelet[2165]: E0209 08:55:48.944997 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.945316 kubelet[2165]: W0209 08:55:48.945009 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.945316 kubelet[2165]: E0209 08:55:48.945020 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.945509 kubelet[2165]: E0209 08:55:48.945336 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.945509 kubelet[2165]: W0209 08:55:48.945346 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.945509 kubelet[2165]: E0209 08:55:48.945360 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.946264 kubelet[2165]: E0209 08:55:48.946245 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.946264 kubelet[2165]: W0209 08:55:48.946258 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.946264 kubelet[2165]: E0209 08:55:48.946271 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.949596 kubelet[2165]: E0209 08:55:48.946590 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.949596 kubelet[2165]: W0209 08:55:48.946600 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.949596 kubelet[2165]: E0209 08:55:48.946611 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.951594 kubelet[2165]: E0209 08:55:48.950945 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.951594 kubelet[2165]: W0209 08:55:48.950962 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.951594 kubelet[2165]: E0209 08:55:48.950981 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.958126 kubelet[2165]: I0209 08:55:48.957708 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-56d879d77b-pgjhk" podStartSLOduration=-9.223372030897123e+09 pod.CreationTimestamp="2024-02-09 08:55:43 +0000 UTC" firstStartedPulling="2024-02-09 08:55:44.43678499 +0000 UTC m=+20.015340183" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:48.956934631 +0000 UTC m=+24.535489877" watchObservedRunningTime="2024-02-09 08:55:48.957652565 +0000 UTC m=+24.536207788" Feb 9 08:55:48.975387 kubelet[2165]: E0209 08:55:48.975351 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.975654 kubelet[2165]: W0209 08:55:48.975627 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.975787 kubelet[2165]: E0209 08:55:48.975771 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.976156 kubelet[2165]: E0209 08:55:48.976139 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.976273 kubelet[2165]: W0209 08:55:48.976259 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.976371 kubelet[2165]: E0209 08:55:48.976359 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.976670 kubelet[2165]: E0209 08:55:48.976637 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.976670 kubelet[2165]: W0209 08:55:48.976664 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.976817 kubelet[2165]: E0209 08:55:48.976694 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.976897 kubelet[2165]: E0209 08:55:48.976880 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.976897 kubelet[2165]: W0209 08:55:48.976889 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.976897 kubelet[2165]: E0209 08:55:48.976899 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.977040 kubelet[2165]: E0209 08:55:48.977029 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.977040 kubelet[2165]: W0209 08:55:48.977036 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.977134 kubelet[2165]: E0209 08:55:48.977046 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.977219 kubelet[2165]: E0209 08:55:48.977202 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.977219 kubelet[2165]: W0209 08:55:48.977212 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.977219 kubelet[2165]: E0209 08:55:48.977225 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.977669 kubelet[2165]: E0209 08:55:48.977648 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.977806 kubelet[2165]: W0209 08:55:48.977787 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.977908 kubelet[2165]: E0209 08:55:48.977894 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.978618 kubelet[2165]: E0209 08:55:48.978601 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.978718 kubelet[2165]: W0209 08:55:48.978702 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.978814 kubelet[2165]: E0209 08:55:48.978802 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.979111 kubelet[2165]: E0209 08:55:48.979096 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.979216 kubelet[2165]: W0209 08:55:48.979201 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.979319 kubelet[2165]: E0209 08:55:48.979305 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.979542 kubelet[2165]: E0209 08:55:48.979525 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.979542 kubelet[2165]: W0209 08:55:48.979538 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.979699 kubelet[2165]: E0209 08:55:48.979556 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.979741 kubelet[2165]: E0209 08:55:48.979713 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.979741 kubelet[2165]: W0209 08:55:48.979719 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.979741 kubelet[2165]: E0209 08:55:48.979728 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.981152 kubelet[2165]: E0209 08:55:48.980986 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.981152 kubelet[2165]: W0209 08:55:48.980999 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.981152 kubelet[2165]: E0209 08:55:48.981017 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.982960 kubelet[2165]: E0209 08:55:48.982346 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.982960 kubelet[2165]: W0209 08:55:48.982361 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.982960 kubelet[2165]: E0209 08:55:48.982514 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.982960 kubelet[2165]: W0209 08:55:48.982520 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.982960 kubelet[2165]: E0209 08:55:48.982530 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.983224 kubelet[2165]: E0209 08:55:48.983207 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.984458 kubelet[2165]: E0209 08:55:48.983712 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.984458 kubelet[2165]: W0209 08:55:48.983724 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.984458 kubelet[2165]: E0209 08:55:48.983740 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.984458 kubelet[2165]: E0209 08:55:48.983909 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.984458 kubelet[2165]: W0209 08:55:48.983914 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.984458 kubelet[2165]: E0209 08:55:48.983922 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:48.985001 kubelet[2165]: E0209 08:55:48.984982 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.985001 kubelet[2165]: W0209 08:55:48.984995 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.985167 kubelet[2165]: E0209 08:55:48.985151 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:48.985167 kubelet[2165]: W0209 08:55:48.985161 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:48.985269 kubelet[2165]: E0209 08:55:48.985175 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:48.985340 kubelet[2165]: E0209 08:55:48.985152 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.655696 systemd[1]: run-containerd-runc-k8s.io-baa849c605308f7a739dcc9a0449b051a3add9a97986dfbc951f9b94f5803984-runc.hyr9Ir.mount: Deactivated successfully. Feb 9 08:55:49.836894 kubelet[2165]: E0209 08:55:49.836847 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:49.934425 kubelet[2165]: I0209 08:55:49.933995 2165 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 08:55:49.935630 kubelet[2165]: E0209 08:55:49.935531 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:49.959381 kubelet[2165]: E0209 08:55:49.959334 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.959381 kubelet[2165]: W0209 08:55:49.959364 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.959770 kubelet[2165]: E0209 08:55:49.959402 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.959830 kubelet[2165]: E0209 08:55:49.959785 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.959830 kubelet[2165]: W0209 08:55:49.959804 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.959830 kubelet[2165]: E0209 08:55:49.959822 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.960092 kubelet[2165]: E0209 08:55:49.960060 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.960131 kubelet[2165]: W0209 08:55:49.960091 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.960131 kubelet[2165]: E0209 08:55:49.960109 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.960363 kubelet[2165]: E0209 08:55:49.960348 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.960363 kubelet[2165]: W0209 08:55:49.960361 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.960446 kubelet[2165]: E0209 08:55:49.960377 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.960663 kubelet[2165]: E0209 08:55:49.960646 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.960715 kubelet[2165]: W0209 08:55:49.960664 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.960715 kubelet[2165]: E0209 08:55:49.960685 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.960866 kubelet[2165]: E0209 08:55:49.960852 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.960901 kubelet[2165]: W0209 08:55:49.960866 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.960901 kubelet[2165]: E0209 08:55:49.960881 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.961207 kubelet[2165]: E0209 08:55:49.961186 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.961264 kubelet[2165]: W0209 08:55:49.961203 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.961264 kubelet[2165]: E0209 08:55:49.961230 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.961477 kubelet[2165]: E0209 08:55:49.961460 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.961477 kubelet[2165]: W0209 08:55:49.961473 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.961649 kubelet[2165]: E0209 08:55:49.961492 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.961902 kubelet[2165]: E0209 08:55:49.961881 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.961902 kubelet[2165]: W0209 08:55:49.961900 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.961988 kubelet[2165]: E0209 08:55:49.961919 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.962175 kubelet[2165]: E0209 08:55:49.962159 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.962223 kubelet[2165]: W0209 08:55:49.962174 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.962223 kubelet[2165]: E0209 08:55:49.962190 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.962480 kubelet[2165]: E0209 08:55:49.962449 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.962480 kubelet[2165]: W0209 08:55:49.962468 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.962595 kubelet[2165]: E0209 08:55:49.962485 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.962775 kubelet[2165]: E0209 08:55:49.962749 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.962815 kubelet[2165]: W0209 08:55:49.962779 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.962815 kubelet[2165]: E0209 08:55:49.962796 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.984546 kubelet[2165]: E0209 08:55:49.984501 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.984546 kubelet[2165]: W0209 08:55:49.984536 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.984790 kubelet[2165]: E0209 08:55:49.984605 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.984950 kubelet[2165]: E0209 08:55:49.984932 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.984950 kubelet[2165]: W0209 08:55:49.984946 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.985117 kubelet[2165]: E0209 08:55:49.984966 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.985175 kubelet[2165]: E0209 08:55:49.985154 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.985175 kubelet[2165]: W0209 08:55:49.985161 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.985175 kubelet[2165]: E0209 08:55:49.985172 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.985393 kubelet[2165]: E0209 08:55:49.985380 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.985393 kubelet[2165]: W0209 08:55:49.985391 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.985488 kubelet[2165]: E0209 08:55:49.985408 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.985774 kubelet[2165]: E0209 08:55:49.985756 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.985774 kubelet[2165]: W0209 08:55:49.985770 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.985912 kubelet[2165]: E0209 08:55:49.985791 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.985965 kubelet[2165]: E0209 08:55:49.985955 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.986012 kubelet[2165]: W0209 08:55:49.985967 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.986012 kubelet[2165]: E0209 08:55:49.985983 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.986331 kubelet[2165]: E0209 08:55:49.986311 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.986331 kubelet[2165]: W0209 08:55:49.986329 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.986812 kubelet[2165]: E0209 08:55:49.986452 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.986812 kubelet[2165]: E0209 08:55:49.986537 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.986812 kubelet[2165]: W0209 08:55:49.986549 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.986812 kubelet[2165]: E0209 08:55:49.986759 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.987002 kubelet[2165]: E0209 08:55:49.986761 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.987002 kubelet[2165]: W0209 08:55:49.986859 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.987002 kubelet[2165]: E0209 08:55:49.986878 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.987088 kubelet[2165]: E0209 08:55:49.987079 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.987088 kubelet[2165]: W0209 08:55:49.987086 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.987149 kubelet[2165]: E0209 08:55:49.987097 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.987304 kubelet[2165]: E0209 08:55:49.987286 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.987304 kubelet[2165]: W0209 08:55:49.987300 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.987304 kubelet[2165]: E0209 08:55:49.987312 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.987530 kubelet[2165]: E0209 08:55:49.987516 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.987530 kubelet[2165]: W0209 08:55:49.987528 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.987646 kubelet[2165]: E0209 08:55:49.987549 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.987998 kubelet[2165]: E0209 08:55:49.987948 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.987998 kubelet[2165]: W0209 08:55:49.987963 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.987998 kubelet[2165]: E0209 08:55:49.987978 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.988285 kubelet[2165]: E0209 08:55:49.988265 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.988285 kubelet[2165]: W0209 08:55:49.988283 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.988506 kubelet[2165]: E0209 08:55:49.988310 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 08:55:49.988686 kubelet[2165]: E0209 08:55:49.988669 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.988746 kubelet[2165]: W0209 08:55:49.988687 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.988746 kubelet[2165]: E0209 08:55:49.988714 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.988954 kubelet[2165]: E0209 08:55:49.988938 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.989020 kubelet[2165]: W0209 08:55:49.988960 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.989020 kubelet[2165]: E0209 08:55:49.988977 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.989253 kubelet[2165]: E0209 08:55:49.989233 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.989253 kubelet[2165]: W0209 08:55:49.989254 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.989338 kubelet[2165]: E0209 08:55:49.989266 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:49.989832 kubelet[2165]: E0209 08:55:49.989811 2165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 08:55:49.989832 kubelet[2165]: W0209 08:55:49.989824 2165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 08:55:49.989958 kubelet[2165]: E0209 08:55:49.989838 2165 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 08:55:50.480498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071134074.mount: Deactivated successfully. 
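The burst of kubelet errors above comes from the FlexVolume dynamic plugin prober repeatedly probing the nodeagent~uds driver directory before the uds binary has been installed by Calico's flexvol-driver init container: the driver call finds no executable, produces empty output, and unmarshalling that empty output is what yields "unexpected end of JSON input". A minimal Go sketch (illustrative only; the struct is a stand-in, not kubelet's real type) that reproduces the error:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus loosely mirrors the JSON a FlexVolume driver is expected to
    // print for the "init" call; it is an illustrative stand-in, not kubelet's
    // actual type definition.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds executable is missing, so the driver call produced no output.
        output := ""

        var st DriverStatus
        if err := json.Unmarshal([]byte(output), &st); err != nil {
            // Prints "unexpected end of JSON input" -- the same error logged by
            // driver-call.go:262 for every probe until the driver is installed.
            fmt.Println(err)
        }
    }

Once the pod2daemon-flexvol container pulled and started below has copied the driver binary into place, these probes should stop failing.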
Feb 9 08:55:51.836275 kubelet[2165]: E0209 08:55:51.836219 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:52.240459 env[1192]: time="2024-02-09T08:55:52.240407412Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:52.244259 env[1192]: time="2024-02-09T08:55:52.244213329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:52.247277 env[1192]: time="2024-02-09T08:55:52.247222470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:52.250769 env[1192]: time="2024-02-09T08:55:52.250711532Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:55:52.252632 env[1192]: time="2024-02-09T08:55:52.252546488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 08:55:52.258165 env[1192]: time="2024-02-09T08:55:52.257620654Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 08:55:52.273934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338711184.mount: Deactivated successfully. Feb 9 08:55:52.284769 env[1192]: time="2024-02-09T08:55:52.284714789Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214\"" Feb 9 08:55:52.287916 env[1192]: time="2024-02-09T08:55:52.287872781Z" level=info msg="StartContainer for \"31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214\"" Feb 9 08:55:52.328739 systemd[1]: run-containerd-runc-k8s.io-31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214-runc.rTGVTM.mount: Deactivated successfully. 
Feb 9 08:55:52.380921 env[1192]: time="2024-02-09T08:55:52.379421714Z" level=info msg="StartContainer for \"31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214\" returns successfully" Feb 9 08:55:52.478335 env[1192]: time="2024-02-09T08:55:52.478262749Z" level=info msg="shim disconnected" id=31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214 Feb 9 08:55:52.478335 env[1192]: time="2024-02-09T08:55:52.478332622Z" level=warning msg="cleaning up after shim disconnected" id=31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214 namespace=k8s.io Feb 9 08:55:52.478335 env[1192]: time="2024-02-09T08:55:52.478347311Z" level=info msg="cleaning up dead shim" Feb 9 08:55:52.494397 env[1192]: time="2024-02-09T08:55:52.493599444Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:55:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2907 runtime=io.containerd.runc.v2\n" Feb 9 08:55:52.943199 kubelet[2165]: E0209 08:55:52.942975 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:55:52.950183 env[1192]: time="2024-02-09T08:55:52.950117857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 08:55:53.269774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f126a74ca866900efaa1357be58e378f07f48408d921787c6145270ff86214-rootfs.mount: Deactivated successfully. Feb 9 08:55:53.835975 kubelet[2165]: E0209 08:55:53.835930 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:55.009121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174997456.mount: Deactivated successfully. 
Feb 9 08:55:55.836440 kubelet[2165]: E0209 08:55:55.836359 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:57.836325 kubelet[2165]: E0209 08:55:57.836272 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:55:59.836275 kubelet[2165]: E0209 08:55:59.835904 2165 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:56:00.026494 env[1192]: time="2024-02-09T08:56:00.026420975Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:00.135936 env[1192]: time="2024-02-09T08:56:00.135429231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:00.139417 env[1192]: time="2024-02-09T08:56:00.139361291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:00.141967 env[1192]: time="2024-02-09T08:56:00.141930688Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:00.144336 env[1192]: time="2024-02-09T08:56:00.143335895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 08:56:00.150637 env[1192]: time="2024-02-09T08:56:00.150551982Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 08:56:00.166584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992597867.mount: Deactivated successfully. 
Feb 9 08:56:00.178102 env[1192]: time="2024-02-09T08:56:00.177991238Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216\"" Feb 9 08:56:00.179537 env[1192]: time="2024-02-09T08:56:00.179465607Z" level=info msg="StartContainer for \"ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216\"" Feb 9 08:56:00.259818 env[1192]: time="2024-02-09T08:56:00.259754364Z" level=info msg="StartContainer for \"ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216\" returns successfully" Feb 9 08:56:00.957757 env[1192]: time="2024-02-09T08:56:00.957689975Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 08:56:00.972929 kubelet[2165]: I0209 08:56:00.972890 2165 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 08:56:00.974026 kubelet[2165]: E0209 08:56:00.974005 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:01.008679 kubelet[2165]: I0209 08:56:01.007391 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:56:01.014780 kubelet[2165]: I0209 08:56:01.014740 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:56:01.015008 kubelet[2165]: I0209 08:56:01.014927 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:56:01.046862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216-rootfs.mount: Deactivated successfully. 
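The "failed to reload cni configuration ... no network config found in /etc/cni/net.d" message fires because the install-cni container has started writing files (the fs event is for calico-kubeconfig) but no loadable network config is in place yet, so containerd still reports the CNI plugin as not initialized. A small Go sketch of that check, under the assumption that readiness here simply means a conf/conflist file exists in /etc/cni/net.d:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Illustrative check: containerd's CRI plugin treats the pod network as
    // ready once a CNI config file can be loaded from /etc/cni/net.d. This
    // sketch only looks for candidate files; it does not parse or validate them.
    func main() {
        confDir := "/etc/cni/net.d"

        var matches []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            m, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            matches = append(matches, m...)
        }

        if len(matches) == 0 {
            fmt.Printf("no network config found in %s: cni plugin not initialized\n", confDir)
            return
        }
        fmt.Println("CNI config present:", matches)
    }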
Feb 9 08:56:01.054302 env[1192]: time="2024-02-09T08:56:01.053931925Z" level=info msg="shim disconnected" id=ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216 Feb 9 08:56:01.054302 env[1192]: time="2024-02-09T08:56:01.053980529Z" level=warning msg="cleaning up after shim disconnected" id=ac783e5ab68ab7aa11fdb2d8408215d8615d0e29ddd9392a65593ced96b29216 namespace=k8s.io Feb 9 08:56:01.054302 env[1192]: time="2024-02-09T08:56:01.053991398Z" level=info msg="cleaning up dead shim" Feb 9 08:56:01.070163 env[1192]: time="2024-02-09T08:56:01.070104899Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:56:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2984 runtime=io.containerd.runc.v2\n" Feb 9 08:56:01.073886 kubelet[2165]: I0209 08:56:01.073834 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1409cbc3-f199-4f73-86e3-3a904676c00d-tigera-ca-bundle\") pod \"calico-kube-controllers-7ffc8f9f79-lg8qn\" (UID: \"1409cbc3-f199-4f73-86e3-3a904676c00d\") " pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" Feb 9 08:56:01.074315 kubelet[2165]: I0209 08:56:01.074282 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xmd\" (UniqueName: \"kubernetes.io/projected/684469d9-8de5-4c0c-b081-0fa23de4f0b8-kube-api-access-64xmd\") pod \"coredns-787d4945fb-bzf2s\" (UID: \"684469d9-8de5-4c0c-b081-0fa23de4f0b8\") " pod="kube-system/coredns-787d4945fb-bzf2s" Feb 9 08:56:01.074492 kubelet[2165]: I0209 08:56:01.074462 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfa1b0d1-4987-4959-b29e-00a21e795aca-config-volume\") pod \"coredns-787d4945fb-nvkp8\" (UID: \"bfa1b0d1-4987-4959-b29e-00a21e795aca\") " pod="kube-system/coredns-787d4945fb-nvkp8" Feb 9 08:56:01.074716 kubelet[2165]: I0209 08:56:01.074702 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdzx8\" (UniqueName: \"kubernetes.io/projected/1409cbc3-f199-4f73-86e3-3a904676c00d-kube-api-access-pdzx8\") pod \"calico-kube-controllers-7ffc8f9f79-lg8qn\" (UID: \"1409cbc3-f199-4f73-86e3-3a904676c00d\") " pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" Feb 9 08:56:01.075197 kubelet[2165]: I0209 08:56:01.075181 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6snv\" (UniqueName: \"kubernetes.io/projected/bfa1b0d1-4987-4959-b29e-00a21e795aca-kube-api-access-v6snv\") pod \"coredns-787d4945fb-nvkp8\" (UID: \"bfa1b0d1-4987-4959-b29e-00a21e795aca\") " pod="kube-system/coredns-787d4945fb-nvkp8" Feb 9 08:56:01.075544 kubelet[2165]: I0209 08:56:01.075351 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684469d9-8de5-4c0c-b081-0fa23de4f0b8-config-volume\") pod \"coredns-787d4945fb-bzf2s\" (UID: \"684469d9-8de5-4c0c-b081-0fa23de4f0b8\") " pod="kube-system/coredns-787d4945fb-bzf2s" Feb 9 08:56:01.311248 kubelet[2165]: E0209 08:56:01.311194 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:01.313967 env[1192]: time="2024-02-09T08:56:01.313907846Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nvkp8,Uid:bfa1b0d1-4987-4959-b29e-00a21e795aca,Namespace:kube-system,Attempt:0,}" Feb 9 08:56:01.337743 kubelet[2165]: E0209 08:56:01.336528 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:01.340888 env[1192]: time="2024-02-09T08:56:01.340842027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bzf2s,Uid:684469d9-8de5-4c0c-b081-0fa23de4f0b8,Namespace:kube-system,Attempt:0,}" Feb 9 08:56:01.455519 env[1192]: time="2024-02-09T08:56:01.455433554Z" level=error msg="Failed to destroy network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.455945 env[1192]: time="2024-02-09T08:56:01.455904346Z" level=error msg="encountered an error cleaning up failed sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.456046 env[1192]: time="2024-02-09T08:56:01.455966548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nvkp8,Uid:bfa1b0d1-4987-4959-b29e-00a21e795aca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.456705 kubelet[2165]: E0209 08:56:01.456313 2165 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.456705 kubelet[2165]: E0209 08:56:01.456434 2165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-nvkp8" Feb 9 08:56:01.456705 kubelet[2165]: E0209 08:56:01.456496 2165 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-nvkp8" Feb 9 08:56:01.457006 kubelet[2165]: E0209 08:56:01.456667 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-787d4945fb-nvkp8_kube-system(bfa1b0d1-4987-4959-b29e-00a21e795aca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-nvkp8_kube-system(bfa1b0d1-4987-4959-b29e-00a21e795aca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-nvkp8" podUID=bfa1b0d1-4987-4959-b29e-00a21e795aca Feb 9 08:56:01.473864 env[1192]: time="2024-02-09T08:56:01.473795798Z" level=error msg="Failed to destroy network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.474257 env[1192]: time="2024-02-09T08:56:01.474220948Z" level=error msg="encountered an error cleaning up failed sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.474334 env[1192]: time="2024-02-09T08:56:01.474292558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bzf2s,Uid:684469d9-8de5-4c0c-b081-0fa23de4f0b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.474662 kubelet[2165]: E0209 08:56:01.474633 2165 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.474756 kubelet[2165]: E0209 08:56:01.474705 2165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-bzf2s" Feb 9 08:56:01.474756 kubelet[2165]: E0209 08:56:01.474732 2165 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-bzf2s" Feb 9 08:56:01.474833 kubelet[2165]: E0209 08:56:01.474800 2165 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-bzf2s_kube-system(684469d9-8de5-4c0c-b081-0fa23de4f0b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-bzf2s_kube-system(684469d9-8de5-4c0c-b081-0fa23de4f0b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-bzf2s" podUID=684469d9-8de5-4c0c-b081-0fa23de4f0b8 Feb 9 08:56:01.627774 env[1192]: time="2024-02-09T08:56:01.627079273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ffc8f9f79-lg8qn,Uid:1409cbc3-f199-4f73-86e3-3a904676c00d,Namespace:calico-system,Attempt:0,}" Feb 9 08:56:01.710906 env[1192]: time="2024-02-09T08:56:01.710822930Z" level=error msg="Failed to destroy network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.711797 env[1192]: time="2024-02-09T08:56:01.711705242Z" level=error msg="encountered an error cleaning up failed sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.712106 env[1192]: time="2024-02-09T08:56:01.712054450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ffc8f9f79-lg8qn,Uid:1409cbc3-f199-4f73-86e3-3a904676c00d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.712890 kubelet[2165]: E0209 08:56:01.712636 2165 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.712890 kubelet[2165]: E0209 08:56:01.712730 2165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" Feb 9 08:56:01.712890 kubelet[2165]: E0209 08:56:01.712768 2165 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" Feb 9 08:56:01.713719 kubelet[2165]: E0209 08:56:01.713287 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7ffc8f9f79-lg8qn_calico-system(1409cbc3-f199-4f73-86e3-3a904676c00d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7ffc8f9f79-lg8qn_calico-system(1409cbc3-f199-4f73-86e3-3a904676c00d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" podUID=1409cbc3-f199-4f73-86e3-3a904676c00d Feb 9 08:56:01.842616 env[1192]: time="2024-02-09T08:56:01.841431300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kk2hr,Uid:a86e9fee-b3a3-441d-8e06-482d03abae6a,Namespace:calico-system,Attempt:0,}" Feb 9 08:56:01.916102 env[1192]: time="2024-02-09T08:56:01.915623863Z" level=error msg="Failed to destroy network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.916671 env[1192]: time="2024-02-09T08:56:01.916626801Z" level=error msg="encountered an error cleaning up failed sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.916830 env[1192]: time="2024-02-09T08:56:01.916802223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kk2hr,Uid:a86e9fee-b3a3-441d-8e06-482d03abae6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.917708 kubelet[2165]: E0209 08:56:01.917134 2165 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:01.917708 kubelet[2165]: E0209 08:56:01.917197 2165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:56:01.917708 kubelet[2165]: E0209 08:56:01.917221 2165 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kk2hr" Feb 9 08:56:01.919543 kubelet[2165]: E0209 08:56:01.917285 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kk2hr_calico-system(a86e9fee-b3a3-441d-8e06-482d03abae6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kk2hr_calico-system(a86e9fee-b3a3-441d-8e06-482d03abae6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:56:01.965942 kubelet[2165]: I0209 08:56:01.964306 2165 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:01.967942 env[1192]: time="2024-02-09T08:56:01.967273740Z" level=info msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" Feb 9 08:56:01.970097 kubelet[2165]: I0209 08:56:01.970066 2165 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:01.973132 kubelet[2165]: E0209 08:56:01.973089 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:01.975231 env[1192]: time="2024-02-09T08:56:01.975181440Z" level=info msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" Feb 9 08:56:01.975871 env[1192]: time="2024-02-09T08:56:01.975818074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 08:56:01.978709 kubelet[2165]: I0209 08:56:01.977111 2165 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:01.980936 env[1192]: time="2024-02-09T08:56:01.980896194Z" level=info msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" Feb 9 08:56:01.982850 kubelet[2165]: I0209 08:56:01.981730 2165 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:01.984820 env[1192]: time="2024-02-09T08:56:01.983013575Z" level=info msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" Feb 9 08:56:02.070087 env[1192]: time="2024-02-09T08:56:02.069971947Z" level=error msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" failed" error="failed to destroy network for sandbox 
\"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:02.070970 kubelet[2165]: E0209 08:56:02.070757 2165 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:02.070970 kubelet[2165]: E0209 08:56:02.070848 2165 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd} Feb 9 08:56:02.070970 kubelet[2165]: E0209 08:56:02.070902 2165 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a86e9fee-b3a3-441d-8e06-482d03abae6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 08:56:02.070970 kubelet[2165]: E0209 08:56:02.070938 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a86e9fee-b3a3-441d-8e06-482d03abae6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kk2hr" podUID=a86e9fee-b3a3-441d-8e06-482d03abae6a Feb 9 08:56:02.072353 env[1192]: time="2024-02-09T08:56:02.072291152Z" level=error msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" failed" error="failed to destroy network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:02.073113 kubelet[2165]: E0209 08:56:02.072856 2165 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:02.073113 kubelet[2165]: E0209 08:56:02.072934 2165 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce} Feb 9 08:56:02.073113 kubelet[2165]: E0209 08:56:02.073011 2165 kuberuntime_manager.go:705] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"684469d9-8de5-4c0c-b081-0fa23de4f0b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 08:56:02.073113 kubelet[2165]: E0209 08:56:02.073086 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"684469d9-8de5-4c0c-b081-0fa23de4f0b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-bzf2s" podUID=684469d9-8de5-4c0c-b081-0fa23de4f0b8 Feb 9 08:56:02.091745 env[1192]: time="2024-02-09T08:56:02.091677312Z" level=error msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" failed" error="failed to destroy network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:02.092548 kubelet[2165]: E0209 08:56:02.092351 2165 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:02.092548 kubelet[2165]: E0209 08:56:02.092412 2165 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf} Feb 9 08:56:02.092548 kubelet[2165]: E0209 08:56:02.092450 2165 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfa1b0d1-4987-4959-b29e-00a21e795aca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 08:56:02.092548 kubelet[2165]: E0209 08:56:02.092502 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfa1b0d1-4987-4959-b29e-00a21e795aca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-nvkp8" podUID=bfa1b0d1-4987-4959-b29e-00a21e795aca Feb 9 08:56:02.106000 env[1192]: 
time="2024-02-09T08:56:02.105924346Z" level=error msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" failed" error="failed to destroy network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 08:56:02.106668 kubelet[2165]: E0209 08:56:02.106447 2165 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:02.106668 kubelet[2165]: E0209 08:56:02.106509 2165 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046} Feb 9 08:56:02.106668 kubelet[2165]: E0209 08:56:02.106557 2165 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1409cbc3-f199-4f73-86e3-3a904676c00d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 08:56:02.106668 kubelet[2165]: E0209 08:56:02.106623 2165 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1409cbc3-f199-4f73-86e3-3a904676c00d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" podUID=1409cbc3-f199-4f73-86e3-3a904676c00d Feb 9 08:56:02.197824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce-shm.mount: Deactivated successfully. Feb 9 08:56:02.198008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf-shm.mount: Deactivated successfully. Feb 9 08:56:10.043348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382230272.mount: Deactivated successfully. 
Feb 9 08:56:10.125519 kubelet[2165]: I0209 08:56:10.124911 2165 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 08:56:10.128747 kubelet[2165]: E0209 08:56:10.126954 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:10.253000 audit[3231]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:10.264214 kernel: audit: type=1325 audit(1707468970.253:296): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:10.267787 kernel: audit: type=1300 audit(1707468970.253:296): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffd48ff1ae0 a2=0 a3=7ffd48ff1acc items=0 ppid=2381 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:10.267872 kernel: audit: type=1327 audit(1707468970.253:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:10.253000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffd48ff1ae0 a2=0 a3=7ffd48ff1acc items=0 ppid=2381 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:10.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:10.263000 audit[3231]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:10.263000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffd48ff1ae0 a2=0 a3=7ffd48ff1acc items=0 ppid=2381 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:10.277099 kernel: audit: type=1325 audit(1707468970.263:297): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:10.277291 kernel: audit: type=1300 audit(1707468970.263:297): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffd48ff1ae0 a2=0 a3=7ffd48ff1acc items=0 ppid=2381 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:10.277350 kernel: audit: type=1327 audit(1707468970.263:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:10.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:10.310215 env[1192]: time="2024-02-09T08:56:10.310084665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 08:56:10.312945 env[1192]: time="2024-02-09T08:56:10.312891250Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:10.314871 env[1192]: time="2024-02-09T08:56:10.314830964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:10.316405 env[1192]: time="2024-02-09T08:56:10.316370912Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:10.317325 env[1192]: time="2024-02-09T08:56:10.317280891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 08:56:10.336573 env[1192]: time="2024-02-09T08:56:10.336479211Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 08:56:10.353075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274801238.mount: Deactivated successfully. Feb 9 08:56:10.361244 env[1192]: time="2024-02-09T08:56:10.361174941Z" level=info msg="CreateContainer within sandbox \"4d79be970cb1ad0ee750ff37d47d2a93bb90b92b32b187a63e5c2a5557587171\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5\"" Feb 9 08:56:10.363955 env[1192]: time="2024-02-09T08:56:10.363896538Z" level=info msg="StartContainer for \"f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5\"" Feb 9 08:56:10.437714 env[1192]: time="2024-02-09T08:56:10.437663800Z" level=info msg="StartContainer for \"f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5\" returns successfully" Feb 9 08:56:10.599751 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 08:56:10.599976 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 08:56:11.060145 kubelet[2165]: E0209 08:56:11.060103 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:11.062154 kubelet[2165]: E0209 08:56:11.062127 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:11.094722 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.Ss5AZQ.mount: Deactivated successfully. Feb 9 08:56:12.061783 kubelet[2165]: E0209 08:56:12.061757 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:12.085047 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.GaUMcX.mount: Deactivated successfully. 
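The recurring dns.go:156 "Nameserver limits exceeded" errors are not fatal: the node's resolv.conf lists more nameserver entries than the three the glibc resolver (and therefore the kubelet) will apply, so the extras are dropped and the applied line collapses to the first three entries shown in the log. A rough Go sketch of that check, assuming the standard /etc/resolv.conf location:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Rough sketch of the kubelet's nameserver-limit check: the glibc resolver
    // uses at most three nameservers, so any extra entries in resolv.conf are
    // dropped and a warning like the dns.go:156 lines above is emitted.
    func main() {
        const maxNameservers = 3

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        if len(nameservers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
                "the applied nameserver line is: %s\n",
                strings.Join(nameservers[:maxNameservers], " "))
            return
        }
        fmt.Println("nameservers:", strings.Join(nameservers, " "))
    }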
Feb 9 08:56:12.199000 audit[3392]: AVC avc: denied { write } for pid=3392 comm="tee" name="fd" dev="proc" ino=24918 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.204600 kernel: audit: type=1400 audit(1707468972.199:298): avc: denied { write } for pid=3392 comm="tee" name="fd" dev="proc" ino=24918 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.215000 audit[3388]: AVC avc: denied { write } for pid=3388 comm="tee" name="fd" dev="proc" ino=25757 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.215000 audit[3388]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffebdd895e a2=241 a3=1b6 items=1 ppid=3349 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.224288 kernel: audit: type=1400 audit(1707468972.215:299): avc: denied { write } for pid=3388 comm="tee" name="fd" dev="proc" ino=25757 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.224436 kernel: audit: type=1300 audit(1707468972.215:299): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffebdd895e a2=241 a3=1b6 items=1 ppid=3349 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.215000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 08:56:12.230595 kernel: audit: type=1307 audit(1707468972.215:299): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 08:56:12.215000 audit: PATH item=0 name="/dev/fd/63" inode=25740 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.199000 audit[3392]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc9599a970 a2=241 a3=1b6 items=1 ppid=3352 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.199000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 08:56:12.199000 audit: PATH item=0 name="/dev/fd/63" inode=25743 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.227000 audit[3399]: AVC avc: denied { write } for pid=3399 comm="tee" name="fd" dev="proc" ino=25771 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.227000 audit[3399]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6f93e96e a2=241 a3=1b6 items=1 ppid=3348 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.227000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 08:56:12.227000 audit: PATH item=0 name="/dev/fd/63" inode=25752 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.232000 audit[3410]: AVC avc: denied { write } for pid=3410 comm="tee" name="fd" dev="proc" ino=25775 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.232000 audit[3410]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6a70b96e a2=241 a3=1b6 items=1 ppid=3354 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.232000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 08:56:12.232000 audit: PATH item=0 name="/dev/fd/63" inode=25764 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.236000 audit[3413]: AVC avc: denied { write } for pid=3413 comm="tee" name="fd" dev="proc" ino=25779 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.236000 audit[3413]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda0d6296f a2=241 a3=1b6 items=1 ppid=3355 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.236000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 08:56:12.236000 audit: PATH item=0 name="/dev/fd/63" inode=25767 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.261000 audit[3419]: AVC avc: denied { write } for pid=3419 comm="tee" name="fd" dev="proc" ino=24934 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.261000 audit[3419]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1b12096e a2=241 a3=1b6 items=1 ppid=3356 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.261000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 08:56:12.261000 audit: PATH item=0 name="/dev/fd/63" inode=25783 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.261000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.270000 audit[3421]: AVC avc: denied { write } for pid=3421 comm="tee" name="fd" dev="proc" ino=25791 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 08:56:12.270000 audit[3421]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff9cff195f a2=241 a3=1b6 items=1 ppid=3359 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.270000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 08:56:12.270000 audit: PATH item=0 name="/dev/fd/63" inode=24931 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 08:56:12.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.808000 audit: BPF prog-id=10 op=LOAD Feb 9 08:56:12.808000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea5e51b70 a2=70 a3=7f4bae70a000 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.808000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit: BPF prog-id=10 op=UNLOAD Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit: BPF prog-id=11 op=LOAD Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea5e51b70 a2=70 a3=6e items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit: BPF prog-id=11 op=UNLOAD Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffea5e51b20 a2=70 a3=7ffea5e51b70 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit: BPF prog-id=12 op=LOAD Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffea5e51b00 a2=70 a3=7ffea5e51b70 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit: BPF prog-id=12 op=UNLOAD Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5e51be0 a2=70 a3=0 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5e51bd0 a2=70 a3=0 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.809000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.809000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffea5e51c10 a2=70 a3=0 items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { perfmon } for pid=3486 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 
audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit[3486]: AVC avc: denied { bpf } for pid=3486 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.811000 audit: BPF prog-id=13 op=LOAD Feb 9 08:56:12.811000 audit[3486]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffea5e51b30 a2=70 a3=ffffffff items=0 ppid=3358 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 08:56:12.817000 audit[3491]: AVC avc: denied { bpf } for pid=3491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.817000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc7f4a2920 a2=70 a3=fff80800 items=0 ppid=3358 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.817000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 08:56:12.818000 audit[3491]: AVC avc: denied { bpf } for pid=3491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 08:56:12.818000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc7f4a27f0 a2=70 a3=3 items=0 ppid=3358 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.818000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 08:56:12.824000 audit: BPF prog-id=13 op=UNLOAD Feb 9 08:56:12.903000 audit[3516]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=3516 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:12.903000 audit[3516]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fff49bad840 a2=0 a3=7fff49bad82c items=0 ppid=3358 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.903000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:12.907000 audit[3514]: NETFILTER_CFG table=raw:112 family=2 entries=19 op=nft_register_chain pid=3514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 
08:56:12.907000 audit[3514]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffda9ecbc20 a2=0 a3=5574fee60000 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.907000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:12.913000 audit[3517]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_chain pid=3517 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:12.913000 audit[3517]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fffeb1cd1f0 a2=0 a3=5610cd258000 items=0 ppid=3358 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:12.916000 audit[3515]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:12.916000 audit[3515]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffd6cb99700 a2=0 a3=55666a85b000 items=0 ppid=3358 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:12.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:13.064441 kubelet[2165]: E0209 08:56:13.063956 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:13.085376 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.HAOO2Z.mount: Deactivated successfully. 
Feb 9 08:56:13.473375 systemd-networkd[1059]: vxlan.calico: Link UP Feb 9 08:56:13.473384 systemd-networkd[1059]: vxlan.calico: Gained carrier Feb 9 08:56:13.836943 env[1192]: time="2024-02-09T08:56:13.836896174Z" level=info msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" Feb 9 08:56:13.914457 kubelet[2165]: I0209 08:56:13.913906 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2cxj6" podStartSLOduration=-9.223372006943739e+09 pod.CreationTimestamp="2024-02-09 08:55:44 +0000 UTC" firstStartedPulling="2024-02-09 08:55:44.734915111 +0000 UTC m=+20.313470307" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:11.100387772 +0000 UTC m=+46.678942992" watchObservedRunningTime="2024-02-09 08:56:13.911036644 +0000 UTC m=+49.489591854" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.908 [INFO][3569] k8s.go 578: Cleaning up netns ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.910 [INFO][3569] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" iface="eth0" netns="/var/run/netns/cni-2ba3ce36-ff02-1b6f-e5a5-3efa76108587" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.911 [INFO][3569] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" iface="eth0" netns="/var/run/netns/cni-2ba3ce36-ff02-1b6f-e5a5-3efa76108587" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.912 [INFO][3569] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" iface="eth0" netns="/var/run/netns/cni-2ba3ce36-ff02-1b6f-e5a5-3efa76108587" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.912 [INFO][3569] k8s.go 585: Releasing IP address(es) ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:13.912 [INFO][3569] utils.go 188: Calico CNI releasing IP address ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.021 [INFO][3575] ipam_plugin.go 415: Releasing address using handleID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.023 [INFO][3575] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.023 [INFO][3575] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.035 [WARNING][3575] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.035 [INFO][3575] ipam_plugin.go 443: Releasing address using workloadID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.039 [INFO][3575] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:14.044724 env[1192]: 2024-02-09 08:56:14.041 [INFO][3569] k8s.go 591: Teardown processing complete. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:14.045631 env[1192]: time="2024-02-09T08:56:14.045587923Z" level=info msg="TearDown network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" successfully" Feb 9 08:56:14.045760 env[1192]: time="2024-02-09T08:56:14.045736382Z" level=info msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" returns successfully" Feb 9 08:56:14.051269 env[1192]: time="2024-02-09T08:56:14.049170175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ffc8f9f79-lg8qn,Uid:1409cbc3-f199-4f73-86e3-3a904676c00d,Namespace:calico-system,Attempt:1,}" Feb 9 08:56:14.050211 systemd[1]: run-netns-cni\x2d2ba3ce36\x2dff02\x2d1b6f\x2de5a5\x2d3efa76108587.mount: Deactivated successfully. Feb 9 08:56:14.262167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid9a91e18c28: link becomes ready Feb 9 08:56:14.265326 systemd-networkd[1059]: calid9a91e18c28: Link UP Feb 9 08:56:14.265683 systemd-networkd[1059]: calid9a91e18c28: Gained carrier Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.116 [INFO][3581] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0 calico-kube-controllers-7ffc8f9f79- calico-system 1409cbc3-f199-4f73-86e3-3a904676c00d 742 0 2024-02-09 08:55:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ffc8f9f79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b calico-kube-controllers-7ffc8f9f79-lg8qn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid9a91e18c28 [] []}} ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.116 [INFO][3581] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.158 [INFO][3593] ipam_plugin.go 228: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" HandleID="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.175 [INFO][3593] ipam_plugin.go 268: Auto assigning IP ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" HandleID="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027dad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"calico-kube-controllers-7ffc8f9f79-lg8qn", "timestamp":"2024-02-09 08:56:14.15823141 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.175 [INFO][3593] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.175 [INFO][3593] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.175 [INFO][3593] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.178 [INFO][3593] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.188 [INFO][3593] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.196 [INFO][3593] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.200 [INFO][3593] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.203 [INFO][3593] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.203 [INFO][3593] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.206 [INFO][3593] ipam.go 1682: Creating new handle: k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.211 [INFO][3593] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.225 [INFO][3593] ipam.go 1216: Successfully claimed IPs: [192.168.70.193/26] block=192.168.70.192/26 handle="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.227 [INFO][3593] ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.70.193/26] handle="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.227 [INFO][3593] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:14.311705 env[1192]: 2024-02-09 08:56:14.227 [INFO][3593] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.193/26] IPv6=[] ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" HandleID="k8s-pod-network.c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.252 [INFO][3581] k8s.go 385: Populated endpoint ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0", GenerateName:"calico-kube-controllers-7ffc8f9f79-", Namespace:"calico-system", SelfLink:"", UID:"1409cbc3-f199-4f73-86e3-3a904676c00d", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ffc8f9f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"calico-kube-controllers-7ffc8f9f79-lg8qn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9a91e18c28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.253 [INFO][3581] k8s.go 386: Calico CNI using IPs: [192.168.70.193/32] ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.253 [INFO][3581] dataplane_linux.go 68: Setting the host side veth name to calid9a91e18c28 ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.259 [INFO][3581] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.268 [INFO][3581] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0", GenerateName:"calico-kube-controllers-7ffc8f9f79-", Namespace:"calico-system", SelfLink:"", UID:"1409cbc3-f199-4f73-86e3-3a904676c00d", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ffc8f9f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b", Pod:"calico-kube-controllers-7ffc8f9f79-lg8qn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9a91e18c28", MAC:"f6:61:4b:b8:c7:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:14.314105 env[1192]: 2024-02-09 08:56:14.290 [INFO][3581] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b" Namespace="calico-system" Pod="calico-kube-controllers-7ffc8f9f79-lg8qn" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:14.348512 env[1192]: time="2024-02-09T08:56:14.348320463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:14.349177 env[1192]: time="2024-02-09T08:56:14.349108297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:14.349658 env[1192]: time="2024-02-09T08:56:14.349504503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:14.350316 env[1192]: time="2024-02-09T08:56:14.350225948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b pid=3621 runtime=io.containerd.runc.v2 Feb 9 08:56:14.403000 audit[3642]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:14.403000 audit[3642]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffc89bc1ce0 a2=0 a3=7ffc89bc1ccc items=0 ppid=3358 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:14.403000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:14.414038 systemd[1]: run-containerd-runc-k8s.io-c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b-runc.tZn0NV.mount: Deactivated successfully. Feb 9 08:56:14.519179 env[1192]: time="2024-02-09T08:56:14.519047973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ffc8f9f79-lg8qn,Uid:1409cbc3-f199-4f73-86e3-3a904676c00d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b\"" Feb 9 08:56:14.522404 env[1192]: time="2024-02-09T08:56:14.522364999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 08:56:14.641240 systemd-networkd[1059]: vxlan.calico: Gained IPv6LL Feb 9 08:56:14.839378 env[1192]: time="2024-02-09T08:56:14.839242408Z" level=info msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" Feb 9 08:56:14.840493 env[1192]: time="2024-02-09T08:56:14.840190326Z" level=info msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.981 [INFO][3687] k8s.go 578: Cleaning up netns ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.986 [INFO][3687] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" iface="eth0" netns="/var/run/netns/cni-09c60f11-4f6d-eb89-15aa-d0f5e5b9d53f" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.986 [INFO][3687] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" iface="eth0" netns="/var/run/netns/cni-09c60f11-4f6d-eb89-15aa-d0f5e5b9d53f" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.987 [INFO][3687] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" iface="eth0" netns="/var/run/netns/cni-09c60f11-4f6d-eb89-15aa-d0f5e5b9d53f" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.987 [INFO][3687] k8s.go 585: Releasing IP address(es) ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:14.987 [INFO][3687] utils.go 188: Calico CNI releasing IP address ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.099 [INFO][3700] ipam_plugin.go 415: Releasing address using handleID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.119 [INFO][3700] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.120 [INFO][3700] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.139 [WARNING][3700] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.139 [INFO][3700] ipam_plugin.go 443: Releasing address using workloadID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.142 [INFO][3700] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:15.171611 env[1192]: 2024-02-09 08:56:15.151 [INFO][3687] k8s.go 591: Teardown processing complete. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:15.171611 env[1192]: time="2024-02-09T08:56:15.163705070Z" level=info msg="TearDown network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" successfully" Feb 9 08:56:15.171611 env[1192]: time="2024-02-09T08:56:15.163761557Z" level=info msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" returns successfully" Feb 9 08:56:15.171611 env[1192]: time="2024-02-09T08:56:15.166082099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nvkp8,Uid:bfa1b0d1-4987-4959-b29e-00a21e795aca,Namespace:kube-system,Attempt:1,}" Feb 9 08:56:15.172538 kubelet[2165]: E0209 08:56:15.165359 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:15.163193 systemd[1]: run-netns-cni\x2d09c60f11\x2d4f6d\x2deb89\x2d15aa\x2dd0f5e5b9d53f.mount: Deactivated successfully. Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.984 [INFO][3688] k8s.go 578: Cleaning up netns ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.984 [INFO][3688] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" iface="eth0" netns="/var/run/netns/cni-6d934480-abe6-559e-42e8-4d0bc258dddd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.985 [INFO][3688] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" iface="eth0" netns="/var/run/netns/cni-6d934480-abe6-559e-42e8-4d0bc258dddd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.985 [INFO][3688] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" iface="eth0" netns="/var/run/netns/cni-6d934480-abe6-559e-42e8-4d0bc258dddd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.985 [INFO][3688] k8s.go 585: Releasing IP address(es) ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:14.985 [INFO][3688] utils.go 188: Calico CNI releasing IP address ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.128 [INFO][3699] ipam_plugin.go 415: Releasing address using handleID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.128 [INFO][3699] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.142 [INFO][3699] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.160 [WARNING][3699] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.160 [INFO][3699] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.168 [INFO][3699] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:15.186793 env[1192]: 2024-02-09 08:56:15.177 [INFO][3688] k8s.go 591: Teardown processing complete. 
ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:15.186793 env[1192]: time="2024-02-09T08:56:15.182573024Z" level=info msg="TearDown network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" successfully" Feb 9 08:56:15.186793 env[1192]: time="2024-02-09T08:56:15.182609738Z" level=info msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" returns successfully" Feb 9 08:56:15.186793 env[1192]: time="2024-02-09T08:56:15.183278867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kk2hr,Uid:a86e9fee-b3a3-441d-8e06-482d03abae6a,Namespace:calico-system,Attempt:1,}" Feb 9 08:56:15.360337 systemd[1]: run-netns-cni\x2d6d934480\x2dabe6\x2d559e\x2d42e8\x2d4d0bc258dddd.mount: Deactivated successfully. Feb 9 08:56:15.452175 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 08:56:15.452381 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1d1b3671538: link becomes ready Feb 9 08:56:15.451506 systemd-networkd[1059]: cali1d1b3671538: Link UP Feb 9 08:56:15.451780 systemd-networkd[1059]: cali1d1b3671538: Gained carrier Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.300 [INFO][3712] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0 coredns-787d4945fb- kube-system bfa1b0d1-4987-4959-b29e-00a21e795aca 750 0 2024-02-09 08:55:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b coredns-787d4945fb-nvkp8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d1b3671538 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.301 [INFO][3712] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.370 [INFO][3735] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" HandleID="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.387 [INFO][3735] ipam_plugin.go 268: Auto assigning IP ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" HandleID="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027cc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"coredns-787d4945fb-nvkp8", "timestamp":"2024-02-09 08:56:15.370878084 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.388 [INFO][3735] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.388 [INFO][3735] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.388 [INFO][3735] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.391 [INFO][3735] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.397 [INFO][3735] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.402 [INFO][3735] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.406 [INFO][3735] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.412 [INFO][3735] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.412 [INFO][3735] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.414 [INFO][3735] ipam.go 1682: Creating new handle: k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.420 [INFO][3735] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.432 [INFO][3735] ipam.go 1216: Successfully claimed IPs: [192.168.70.194/26] block=192.168.70.192/26 handle="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.432 [INFO][3735] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.194/26] handle="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.432 [INFO][3735] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 08:56:15.533602 env[1192]: 2024-02-09 08:56:15.432 [INFO][3735] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.194/26] IPv6=[] ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" HandleID="k8s-pod-network.52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.534903 env[1192]: 2024-02-09 08:56:15.435 [INFO][3712] k8s.go 385: Populated endpoint ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bfa1b0d1-4987-4959-b29e-00a21e795aca", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"coredns-787d4945fb-nvkp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d1b3671538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:15.534903 env[1192]: 2024-02-09 08:56:15.435 [INFO][3712] k8s.go 386: Calico CNI using IPs: [192.168.70.194/32] ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.534903 env[1192]: 2024-02-09 08:56:15.435 [INFO][3712] dataplane_linux.go 68: Setting the host side veth name to cali1d1b3671538 ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.534903 env[1192]: 2024-02-09 08:56:15.462 [INFO][3712] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.534903 env[1192]: 
2024-02-09 08:56:15.464 [INFO][3712] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bfa1b0d1-4987-4959-b29e-00a21e795aca", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c", Pod:"coredns-787d4945fb-nvkp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d1b3671538", MAC:"de:a2:fc:9e:20:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:15.534903 env[1192]: 2024-02-09 08:56:15.520 [INFO][3712] k8s.go 491: Wrote updated endpoint to datastore ContainerID="52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c" Namespace="kube-system" Pod="coredns-787d4945fb-nvkp8" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:15.536788 systemd-networkd[1059]: calid9a91e18c28: Gained IPv6LL Feb 9 08:56:15.600226 env[1192]: time="2024-02-09T08:56:15.600113290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:15.600226 env[1192]: time="2024-02-09T08:56:15.600156344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:15.600226 env[1192]: time="2024-02-09T08:56:15.600174702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:15.600587 env[1192]: time="2024-02-09T08:56:15.600325117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c pid=3773 runtime=io.containerd.runc.v2 Feb 9 08:56:15.600000 audit[3768]: NETFILTER_CFG table=filter:116 family=2 entries=40 op=nft_register_chain pid=3768 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:15.602942 kernel: kauditd_printk_skb: 117 callbacks suppressed Feb 9 08:56:15.603016 kernel: audit: type=1325 audit(1707468975.600:324): table=filter:116 family=2 entries=40 op=nft_register_chain pid=3768 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:15.600000 audit[3768]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffc457b5740 a2=0 a3=7ffc457b572c items=0 ppid=3358 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:15.609955 kernel: audit: type=1300 audit(1707468975.600:324): arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffc457b5740 a2=0 a3=7ffc457b572c items=0 ppid=3358 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:15.600000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:15.622624 kernel: audit: type=1327 audit(1707468975.600:324): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:15.708221 env[1192]: time="2024-02-09T08:56:15.706432975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nvkp8,Uid:bfa1b0d1-4987-4959-b29e-00a21e795aca,Namespace:kube-system,Attempt:1,} returns sandbox id \"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c\"" Feb 9 08:56:15.708400 kubelet[2165]: E0209 08:56:15.707246 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:15.709809 env[1192]: time="2024-02-09T08:56:15.709766152Z" level=info msg="CreateContainer within sandbox \"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 08:56:15.737351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4d1608a516c: link becomes ready Feb 9 08:56:15.735898 systemd-networkd[1059]: cali4d1608a516c: Link UP Feb 9 08:56:15.736224 systemd-networkd[1059]: cali4d1608a516c: Gained carrier Feb 9 08:56:15.768999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841378077.mount: Deactivated successfully. 
Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.335 [INFO][3722] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0 csi-node-driver- calico-system a86e9fee-b3a3-441d-8e06-482d03abae6a 749 0 2024-02-09 08:55:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b csi-node-driver-kk2hr eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali4d1608a516c [] []}} ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.335 [INFO][3722] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.476 [INFO][3740] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" HandleID="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.529 [INFO][3740] ipam_plugin.go 268: Auto assigning IP ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" HandleID="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ca90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"csi-node-driver-kk2hr", "timestamp":"2024-02-09 08:56:15.476550338 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.529 [INFO][3740] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.529 [INFO][3740] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.529 [INFO][3740] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.532 [INFO][3740] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.539 [INFO][3740] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.547 [INFO][3740] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.554 [INFO][3740] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.559 [INFO][3740] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.560 [INFO][3740] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.565 [INFO][3740] ipam.go 1682: Creating new handle: k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6 Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.575 [INFO][3740] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.709 [INFO][3740] ipam.go 1216: Successfully claimed IPs: [192.168.70.195/26] block=192.168.70.192/26 handle="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.709 [INFO][3740] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.195/26] handle="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.710 [INFO][3740] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 08:56:15.776176 env[1192]: 2024-02-09 08:56:15.710 [INFO][3740] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.195/26] IPv6=[] ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" HandleID="k8s-pod-network.c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.719 [INFO][3722] k8s.go 385: Populated endpoint ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a86e9fee-b3a3-441d-8e06-482d03abae6a", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"csi-node-driver-kk2hr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4d1608a516c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.719 [INFO][3722] k8s.go 386: Calico CNI using IPs: [192.168.70.195/32] ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.719 [INFO][3722] dataplane_linux.go 68: Setting the host side veth name to cali4d1608a516c ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.738 [INFO][3722] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.741 [INFO][3722] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" 
WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a86e9fee-b3a3-441d-8e06-482d03abae6a", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6", Pod:"csi-node-driver-kk2hr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4d1608a516c", MAC:"32:01:6a:6c:b3:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:15.777342 env[1192]: 2024-02-09 08:56:15.762 [INFO][3722] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6" Namespace="calico-system" Pod="csi-node-driver-kk2hr" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:15.780109 env[1192]: time="2024-02-09T08:56:15.780043167Z" level=info msg="CreateContainer within sandbox \"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f1a95ec78335c27b7fa7c6e5da41fb3eae8072a13918485ab43de77d77cf540\"" Feb 9 08:56:15.782774 env[1192]: time="2024-02-09T08:56:15.782715076Z" level=info msg="StartContainer for \"7f1a95ec78335c27b7fa7c6e5da41fb3eae8072a13918485ab43de77d77cf540\"" Feb 9 08:56:15.830790 env[1192]: time="2024-02-09T08:56:15.830694610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:15.831099 env[1192]: time="2024-02-09T08:56:15.830739196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:15.831099 env[1192]: time="2024-02-09T08:56:15.830798359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:15.831099 env[1192]: time="2024-02-09T08:56:15.831020730Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6 pid=3848 runtime=io.containerd.runc.v2 Feb 9 08:56:15.848939 env[1192]: time="2024-02-09T08:56:15.848872000Z" level=info msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" Feb 9 08:56:15.850000 audit[3837]: NETFILTER_CFG table=filter:117 family=2 entries=38 op=nft_register_chain pid=3837 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:15.854598 kernel: audit: type=1325 audit(1707468975.850:325): table=filter:117 family=2 entries=38 op=nft_register_chain pid=3837 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:15.850000 audit[3837]: SYSCALL arch=c000003e syscall=46 success=yes exit=19508 a0=3 a1=7fffabf2acb0 a2=0 a3=7fffabf2ac9c items=0 ppid=3358 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:15.860726 kernel: audit: type=1300 audit(1707468975.850:325): arch=c000003e syscall=46 success=yes exit=19508 a0=3 a1=7fffabf2acb0 a2=0 a3=7fffabf2ac9c items=0 ppid=3358 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:15.850000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:15.864596 kernel: audit: type=1327 audit(1707468975.850:325): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:15.900589 env[1192]: time="2024-02-09T08:56:15.897416588Z" level=info msg="StartContainer for \"7f1a95ec78335c27b7fa7c6e5da41fb3eae8072a13918485ab43de77d77cf540\" returns successfully" Feb 9 08:56:16.003291 env[1192]: time="2024-02-09T08:56:16.003246566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kk2hr,Uid:a86e9fee-b3a3-441d-8e06-482d03abae6a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6\"" Feb 9 08:56:16.103168 kubelet[2165]: E0209 08:56:16.103135 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:16.183045 kubelet[2165]: I0209 08:56:16.183000 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-nvkp8" podStartSLOduration=38.182944574 pod.CreationTimestamp="2024-02-09 08:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:16.182062498 +0000 UTC m=+51.760617712" watchObservedRunningTime="2024-02-09 08:56:16.182944574 +0000 UTC m=+51.761499792" Feb 9 08:56:16.267000 audit[3954]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 
08:56:16.267000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe4f84fdc0 a2=0 a3=7ffe4f84fdac items=0 ppid=2381 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:16.276210 kernel: audit: type=1325 audit(1707468976.267:326): table=filter:118 family=2 entries=12 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:16.276398 kernel: audit: type=1300 audit(1707468976.267:326): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe4f84fdc0 a2=0 a3=7ffe4f84fdac items=0 ppid=2381 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:16.276444 kernel: audit: type=1327 audit(1707468976.267:326): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:16.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:16.268000 audit[3954]: NETFILTER_CFG table=nat:119 family=2 entries=30 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:16.268000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffe4f84fdc0 a2=0 a3=7ffe4f84fdac items=0 ppid=2381 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:16.284972 kernel: audit: type=1325 audit(1707468976.268:327): table=nat:119 family=2 entries=30 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:16.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:16.370108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594519530.mount: Deactivated successfully. Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.186 [INFO][3896] k8s.go 578: Cleaning up netns ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.186 [INFO][3896] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" iface="eth0" netns="/var/run/netns/cni-1c889293-e482-159e-ef46-608d955d58b4" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.186 [INFO][3896] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" iface="eth0" netns="/var/run/netns/cni-1c889293-e482-159e-ef46-608d955d58b4" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.186 [INFO][3896] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" iface="eth0" netns="/var/run/netns/cni-1c889293-e482-159e-ef46-608d955d58b4" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.187 [INFO][3896] k8s.go 585: Releasing IP address(es) ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.187 [INFO][3896] utils.go 188: Calico CNI releasing IP address ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.225 [INFO][3923] ipam_plugin.go 415: Releasing address using handleID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.225 [INFO][3923] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.225 [INFO][3923] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.423 [WARNING][3923] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.423 [INFO][3923] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.426 [INFO][3923] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:16.439108 env[1192]: 2024-02-09 08:56:16.428 [INFO][3896] k8s.go 591: Teardown processing complete. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:16.439108 env[1192]: time="2024-02-09T08:56:16.435513131Z" level=info msg="TearDown network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" successfully" Feb 9 08:56:16.439108 env[1192]: time="2024-02-09T08:56:16.435587314Z" level=info msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" returns successfully" Feb 9 08:56:16.439108 env[1192]: time="2024-02-09T08:56:16.438057742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bzf2s,Uid:684469d9-8de5-4c0c-b081-0fa23de4f0b8,Namespace:kube-system,Attempt:1,}" Feb 9 08:56:16.434862 systemd[1]: run-netns-cni\x2d1c889293\x2de482\x2d159e\x2def46\x2d608d955d58b4.mount: Deactivated successfully. 
Feb 9 08:56:16.445787 kubelet[2165]: E0209 08:56:16.436132 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:16.689754 systemd-networkd[1059]: cali1d1b3671538: Gained IPv6LL Feb 9 08:56:16.787390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 08:56:16.787517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califc2942164a7: link becomes ready Feb 9 08:56:16.785787 systemd-networkd[1059]: califc2942164a7: Link UP Feb 9 08:56:16.788159 systemd-networkd[1059]: califc2942164a7: Gained carrier Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.516 [INFO][3956] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0 coredns-787d4945fb- kube-system 684469d9-8de5-4c0c-b081-0fa23de4f0b8 771 0 2024-02-09 08:55:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b coredns-787d4945fb-bzf2s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc2942164a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.516 [INFO][3956] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.727 [INFO][3968] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" HandleID="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.743 [INFO][3968] ipam_plugin.go 268: Auto assigning IP ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" HandleID="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000cea60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"coredns-787d4945fb-bzf2s", "timestamp":"2024-02-09 08:56:16.727321781 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.743 [INFO][3968] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.743 [INFO][3968] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.743 [INFO][3968] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.746 [INFO][3968] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.753 [INFO][3968] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.758 [INFO][3968] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.761 [INFO][3968] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.763 [INFO][3968] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.763 [INFO][3968] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.765 [INFO][3968] ipam.go 1682: Creating new handle: k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30 Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.769 [INFO][3968] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.774 [INFO][3968] ipam.go 1216: Successfully claimed IPs: [192.168.70.196/26] block=192.168.70.192/26 handle="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.774 [INFO][3968] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.196/26] handle="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.775 [INFO][3968] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 08:56:16.814984 env[1192]: 2024-02-09 08:56:16.775 [INFO][3968] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.196/26] IPv6=[] ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" HandleID="k8s-pod-network.4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.815760 env[1192]: 2024-02-09 08:56:16.776 [INFO][3956] k8s.go 385: Populated endpoint ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"684469d9-8de5-4c0c-b081-0fa23de4f0b8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"coredns-787d4945fb-bzf2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2942164a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:16.815760 env[1192]: 2024-02-09 08:56:16.777 [INFO][3956] k8s.go 386: Calico CNI using IPs: [192.168.70.196/32] ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.815760 env[1192]: 2024-02-09 08:56:16.777 [INFO][3956] dataplane_linux.go 68: Setting the host side veth name to califc2942164a7 ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.815760 env[1192]: 2024-02-09 08:56:16.794 [INFO][3956] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.815760 env[1192]: 
2024-02-09 08:56:16.794 [INFO][3956] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"684469d9-8de5-4c0c-b081-0fa23de4f0b8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30", Pod:"coredns-787d4945fb-bzf2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2942164a7", MAC:"c6:e8:66:e6:ee:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:16.815760 env[1192]: 2024-02-09 08:56:16.811 [INFO][3956] k8s.go 491: Wrote updated endpoint to datastore ContainerID="4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30" Namespace="kube-system" Pod="coredns-787d4945fb-bzf2s" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:16.852000 audit[3991]: NETFILTER_CFG table=filter:120 family=2 entries=38 op=nft_register_chain pid=3991 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:16.852000 audit[3991]: SYSCALL arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7ffc5ac6e1f0 a2=0 a3=7ffc5ac6e1dc items=0 ppid=3358 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:16.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:16.863803 env[1192]: time="2024-02-09T08:56:16.863470132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:16.864312 env[1192]: time="2024-02-09T08:56:16.864275134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:16.864440 env[1192]: time="2024-02-09T08:56:16.864415636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:16.864722 env[1192]: time="2024-02-09T08:56:16.864691392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30 pid=3994 runtime=io.containerd.runc.v2 Feb 9 08:56:16.994106 env[1192]: time="2024-02-09T08:56:16.993081751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bzf2s,Uid:684469d9-8de5-4c0c-b081-0fa23de4f0b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30\"" Feb 9 08:56:16.995626 kubelet[2165]: E0209 08:56:16.995350 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:17.001347 env[1192]: time="2024-02-09T08:56:17.001291604Z" level=info msg="CreateContainer within sandbox \"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 08:56:17.019893 env[1192]: time="2024-02-09T08:56:17.019842638Z" level=info msg="CreateContainer within sandbox \"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd8121bde13f70eb5c43c55c18baa40ae2c7fad42ed432dddfc33dc3abd6c9e5\"" Feb 9 08:56:17.021467 env[1192]: time="2024-02-09T08:56:17.021412303Z" level=info msg="StartContainer for \"dd8121bde13f70eb5c43c55c18baa40ae2c7fad42ed432dddfc33dc3abd6c9e5\"" Feb 9 08:56:17.107040 kubelet[2165]: E0209 08:56:17.106663 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:17.136794 systemd-networkd[1059]: cali4d1608a516c: Gained IPv6LL Feb 9 08:56:17.182021 env[1192]: time="2024-02-09T08:56:17.181952547Z" level=info msg="StartContainer for \"dd8121bde13f70eb5c43c55c18baa40ae2c7fad42ed432dddfc33dc3abd6c9e5\" returns successfully" Feb 9 08:56:17.274000 audit[4087]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:17.274000 audit[4087]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffde6134c60 a2=0 a3=7ffde6134c4c items=0 ppid=2381 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:17.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:17.278000 audit[4087]: NETFILTER_CFG table=nat:122 family=2 entries=51 op=nft_register_chain pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:17.278000 audit[4087]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 
a1=7ffde6134c60 a2=0 a3=7ffde6134c4c items=0 ppid=2381 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:17.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:18.110984 kubelet[2165]: E0209 08:56:18.110169 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:18.116937 kubelet[2165]: E0209 08:56:18.116838 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:18.800781 systemd-networkd[1059]: califc2942164a7: Gained IPv6LL Feb 9 08:56:18.859000 audit[4120]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:18.859000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdc77f2470 a2=0 a3=7ffdc77f245c items=0 ppid=2381 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:18.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:18.862000 audit[4120]: NETFILTER_CFG table=nat:124 family=2 entries=60 op=nft_register_rule pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:18.862000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffdc77f2470 a2=0 a3=7ffdc77f245c items=0 ppid=2381 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:18.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:18.925019 env[1192]: time="2024-02-09T08:56:18.924964787Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:18.930619 env[1192]: time="2024-02-09T08:56:18.930531921Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:18.933834 env[1192]: time="2024-02-09T08:56:18.933791988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:18.949180 env[1192]: time="2024-02-09T08:56:18.949085057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:18.949926 env[1192]: time="2024-02-09T08:56:18.949885406Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 08:56:18.958072 env[1192]: time="2024-02-09T08:56:18.957984419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 08:56:18.977589 env[1192]: time="2024-02-09T08:56:18.977532734Z" level=info msg="CreateContainer within sandbox \"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 08:56:19.003955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394612648.mount: Deactivated successfully. Feb 9 08:56:19.005180 env[1192]: time="2024-02-09T08:56:19.005138685Z" level=info msg="CreateContainer within sandbox \"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d\"" Feb 9 08:56:19.006190 env[1192]: time="2024-02-09T08:56:19.006145062Z" level=info msg="StartContainer for \"77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d\"" Feb 9 08:56:19.115939 kubelet[2165]: E0209 08:56:19.115837 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:19.122171 kubelet[2165]: E0209 08:56:19.117275 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:19.139052 env[1192]: time="2024-02-09T08:56:19.138993469Z" level=info msg="StartContainer for \"77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d\" returns successfully" Feb 9 08:56:19.141147 kubelet[2165]: I0209 08:56:19.140331 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bzf2s" podStartSLOduration=41.140277994 pod.CreationTimestamp="2024-02-09 08:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:18.125678173 +0000 UTC m=+53.704233393" watchObservedRunningTime="2024-02-09 08:56:19.140277994 +0000 UTC m=+54.718833213" Feb 9 08:56:19.237000 audit[4180]: NETFILTER_CFG table=filter:125 family=2 entries=6 op=nft_register_rule pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:19.237000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffb0321910 a2=0 a3=7fffb03218fc items=0 ppid=2381 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:19.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:19.254000 audit[4180]: NETFILTER_CFG table=nat:126 family=2 entries=72 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:19.254000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fffb0321910 a2=0 a3=7fffb03218fc items=0 ppid=2381 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:19.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:20.133050 kubelet[2165]: E0209 08:56:20.132952 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:20.181392 systemd[1]: run-containerd-runc-k8s.io-77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d-runc.fCrGaQ.mount: Deactivated successfully. Feb 9 08:56:20.246605 kubelet[2165]: I0209 08:56:20.243451 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ffc8f9f79-lg8qn" podStartSLOduration=-9.22337200061137e+09 pod.CreationTimestamp="2024-02-09 08:55:44 +0000 UTC" firstStartedPulling="2024-02-09 08:56:14.521616299 +0000 UTC m=+50.100171498" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:20.158995442 +0000 UTC m=+55.737550675" watchObservedRunningTime="2024-02-09 08:56:20.243405604 +0000 UTC m=+55.821960823" Feb 9 08:56:20.972852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326827277.mount: Deactivated successfully. Feb 9 08:56:21.135150 kubelet[2165]: E0209 08:56:21.135051 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:21.799757 env[1192]: time="2024-02-09T08:56:21.799708230Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:21.803640 env[1192]: time="2024-02-09T08:56:21.803584668Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:21.806930 env[1192]: time="2024-02-09T08:56:21.806873698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:21.809952 env[1192]: time="2024-02-09T08:56:21.809886496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:21.810731 env[1192]: time="2024-02-09T08:56:21.810690602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 08:56:21.815314 env[1192]: time="2024-02-09T08:56:21.815259472Z" level=info msg="CreateContainer within sandbox \"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 08:56:21.885769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314248801.mount: Deactivated successfully. 
Feb 9 08:56:21.896330 env[1192]: time="2024-02-09T08:56:21.896258808Z" level=info msg="CreateContainer within sandbox \"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2050bf4b7099ac17c411de368afd3ce094abb7485c34ff504476e5218a471783\"" Feb 9 08:56:21.903176 env[1192]: time="2024-02-09T08:56:21.897343661Z" level=info msg="StartContainer for \"2050bf4b7099ac17c411de368afd3ce094abb7485c34ff504476e5218a471783\"" Feb 9 08:56:21.984652 env[1192]: time="2024-02-09T08:56:21.984537125Z" level=info msg="StartContainer for \"2050bf4b7099ac17c411de368afd3ce094abb7485c34ff504476e5218a471783\" returns successfully" Feb 9 08:56:21.988803 env[1192]: time="2024-02-09T08:56:21.988673054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 08:56:24.435207 env[1192]: time="2024-02-09T08:56:24.435116641Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:24.437682 env[1192]: time="2024-02-09T08:56:24.437628805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:24.439593 env[1192]: time="2024-02-09T08:56:24.439534608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:24.442025 env[1192]: time="2024-02-09T08:56:24.441994318Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:24.442520 env[1192]: time="2024-02-09T08:56:24.442475079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 08:56:24.447300 env[1192]: time="2024-02-09T08:56:24.447252385Z" level=info msg="CreateContainer within sandbox \"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 08:56:24.464436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499180664.mount: Deactivated successfully. 
Feb 9 08:56:24.475623 env[1192]: time="2024-02-09T08:56:24.475533265Z" level=info msg="CreateContainer within sandbox \"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e32dc17dda4f97fd4dab6fc0562f463b2c6ca7f14f93f92c427edd496376d1cf\"" Feb 9 08:56:24.478847 env[1192]: time="2024-02-09T08:56:24.478810298Z" level=info msg="StartContainer for \"e32dc17dda4f97fd4dab6fc0562f463b2c6ca7f14f93f92c427edd496376d1cf\"" Feb 9 08:56:24.587369 env[1192]: time="2024-02-09T08:56:24.585807164Z" level=info msg="StartContainer for \"e32dc17dda4f97fd4dab6fc0562f463b2c6ca7f14f93f92c427edd496376d1cf\" returns successfully" Feb 9 08:56:24.613663 env[1192]: time="2024-02-09T08:56:24.613613978Z" level=info msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" Feb 9 08:56:24.631607 kubelet[2165]: E0209 08:56:24.631430 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.678 [WARNING][4318] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bfa1b0d1-4987-4959-b29e-00a21e795aca", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c", Pod:"coredns-787d4945fb-nvkp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d1b3671538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.679 [INFO][4318] k8s.go 578: Cleaning up netns ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.679 [INFO][4318] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" iface="eth0" netns="" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.679 [INFO][4318] k8s.go 585: Releasing IP address(es) ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.679 [INFO][4318] utils.go 188: Calico CNI releasing IP address ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.703 [INFO][4324] ipam_plugin.go 415: Releasing address using handleID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.704 [INFO][4324] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.704 [INFO][4324] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.712 [WARNING][4324] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.712 [INFO][4324] ipam_plugin.go 443: Releasing address using workloadID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.714 [INFO][4324] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:24.718514 env[1192]: 2024-02-09 08:56:24.716 [INFO][4318] k8s.go 591: Teardown processing complete. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.718514 env[1192]: time="2024-02-09T08:56:24.718210431Z" level=info msg="TearDown network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" successfully" Feb 9 08:56:24.718514 env[1192]: time="2024-02-09T08:56:24.718256521Z" level=info msg="StopPodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" returns successfully" Feb 9 08:56:24.719305 env[1192]: time="2024-02-09T08:56:24.719266308Z" level=info msg="RemovePodSandbox for \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" Feb 9 08:56:24.719351 env[1192]: time="2024-02-09T08:56:24.719308969Z" level=info msg="Forcibly stopping sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\"" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.768 [WARNING][4344] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bfa1b0d1-4987-4959-b29e-00a21e795aca", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"52bf0d683d537d43dd319e52a823f1c75c7b2cc450d944b1663a27efc1f4a95c", Pod:"coredns-787d4945fb-nvkp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d1b3671538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.768 [INFO][4344] k8s.go 578: Cleaning up netns ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.768 [INFO][4344] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" iface="eth0" netns="" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.768 [INFO][4344] k8s.go 585: Releasing IP address(es) ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.768 [INFO][4344] utils.go 188: Calico CNI releasing IP address ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.792 [INFO][4351] ipam_plugin.go 415: Releasing address using handleID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.792 [INFO][4351] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.792 [INFO][4351] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.802 [WARNING][4351] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.802 [INFO][4351] ipam_plugin.go 443: Releasing address using workloadID ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" HandleID="k8s-pod-network.238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--nvkp8-eth0" Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.804 [INFO][4351] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:24.807940 env[1192]: 2024-02-09 08:56:24.806 [INFO][4344] k8s.go 591: Teardown processing complete. ContainerID="238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf" Feb 9 08:56:24.809254 env[1192]: time="2024-02-09T08:56:24.807992822Z" level=info msg="TearDown network for sandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" successfully" Feb 9 08:56:24.813246 env[1192]: time="2024-02-09T08:56:24.813197457Z" level=info msg="RemovePodSandbox \"238dc2423417acd1114ff3910e6dc889acc3abeccf00b74ca242f943d2dad8bf\" returns successfully" Feb 9 08:56:24.814143 env[1192]: time="2024-02-09T08:56:24.814097439Z" level=info msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.858 [WARNING][4369] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"684469d9-8de5-4c0c-b081-0fa23de4f0b8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30", Pod:"coredns-787d4945fb-bzf2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2942164a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.858 [INFO][4369] k8s.go 578: Cleaning up netns ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.859 [INFO][4369] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" iface="eth0" netns="" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.859 [INFO][4369] k8s.go 585: Releasing IP address(es) ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.859 [INFO][4369] utils.go 188: Calico CNI releasing IP address ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.899 [INFO][4375] ipam_plugin.go 415: Releasing address using handleID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.899 [INFO][4375] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.899 [INFO][4375] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.907 [WARNING][4375] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.908 [INFO][4375] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.910 [INFO][4375] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:24.914079 env[1192]: 2024-02-09 08:56:24.912 [INFO][4369] k8s.go 591: Teardown processing complete. 
ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:24.914780 env[1192]: time="2024-02-09T08:56:24.914732230Z" level=info msg="TearDown network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" successfully" Feb 9 08:56:24.914871 env[1192]: time="2024-02-09T08:56:24.914853998Z" level=info msg="StopPodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" returns successfully" Feb 9 08:56:24.915519 env[1192]: time="2024-02-09T08:56:24.915485035Z" level=info msg="RemovePodSandbox for \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" Feb 9 08:56:24.915625 env[1192]: time="2024-02-09T08:56:24.915528403Z" level=info msg="Forcibly stopping sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\"" Feb 9 08:56:24.962513 kubelet[2165]: I0209 08:56:24.962470 2165 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 08:56:24.962781 kubelet[2165]: I0209 08:56:24.962758 2165 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.000 [WARNING][4394] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"684469d9-8de5-4c0c-b081-0fa23de4f0b8", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"4e8641a85f73e85cea1c6fcae183f0e668a27b6bce46ca2675e0f438bcf45e30", Pod:"coredns-787d4945fb-bzf2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc2942164a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.001 [INFO][4394] k8s.go 578: Cleaning up netns ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 
08:56:25.039372 env[1192]: 2024-02-09 08:56:25.001 [INFO][4394] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" iface="eth0" netns="" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.001 [INFO][4394] k8s.go 585: Releasing IP address(es) ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.001 [INFO][4394] utils.go 188: Calico CNI releasing IP address ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.024 [INFO][4401] ipam_plugin.go 415: Releasing address using handleID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.024 [INFO][4401] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.024 [INFO][4401] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.033 [WARNING][4401] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.033 [INFO][4401] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" HandleID="k8s-pod-network.5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Workload="ci--3510.3.2--6--9c47918d0b-k8s-coredns--787d4945fb--bzf2s-eth0" Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.035 [INFO][4401] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:25.039372 env[1192]: 2024-02-09 08:56:25.037 [INFO][4394] k8s.go 591: Teardown processing complete. ContainerID="5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce" Feb 9 08:56:25.039987 env[1192]: time="2024-02-09T08:56:25.039410159Z" level=info msg="TearDown network for sandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" successfully" Feb 9 08:56:25.043616 env[1192]: time="2024-02-09T08:56:25.043522273Z" level=info msg="RemovePodSandbox \"5c9892aba353af8f57392d2819e61bc7a4a73836a6f510d5801fec76d09c22ce\" returns successfully" Feb 9 08:56:25.044134 env[1192]: time="2024-02-09T08:56:25.044098871Z" level=info msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.085 [WARNING][4421] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0", GenerateName:"calico-kube-controllers-7ffc8f9f79-", Namespace:"calico-system", SelfLink:"", UID:"1409cbc3-f199-4f73-86e3-3a904676c00d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ffc8f9f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b", Pod:"calico-kube-controllers-7ffc8f9f79-lg8qn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9a91e18c28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.085 [INFO][4421] k8s.go 578: Cleaning up netns ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.085 [INFO][4421] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" iface="eth0" netns="" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.085 [INFO][4421] k8s.go 585: Releasing IP address(es) ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.085 [INFO][4421] utils.go 188: Calico CNI releasing IP address ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.110 [INFO][4427] ipam_plugin.go 415: Releasing address using handleID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.111 [INFO][4427] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.111 [INFO][4427] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.120 [WARNING][4427] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.120 [INFO][4427] ipam_plugin.go 443: Releasing address using workloadID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.122 [INFO][4427] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:25.126295 env[1192]: 2024-02-09 08:56:25.124 [INFO][4421] k8s.go 591: Teardown processing complete. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.127411 env[1192]: time="2024-02-09T08:56:25.126837141Z" level=info msg="TearDown network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" successfully" Feb 9 08:56:25.127411 env[1192]: time="2024-02-09T08:56:25.126898704Z" level=info msg="StopPodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" returns successfully" Feb 9 08:56:25.129438 env[1192]: time="2024-02-09T08:56:25.129397232Z" level=info msg="RemovePodSandbox for \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" Feb 9 08:56:25.129519 env[1192]: time="2024-02-09T08:56:25.129451626Z" level=info msg="Forcibly stopping sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\"" Feb 9 08:56:25.188469 kubelet[2165]: I0209 08:56:25.188085 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kk2hr" podStartSLOduration=-9.223371995666752e+09 pod.CreationTimestamp="2024-02-09 08:55:44 +0000 UTC" firstStartedPulling="2024-02-09 08:56:16.005122595 +0000 UTC m=+51.583677792" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:25.187093494 +0000 UTC m=+60.765648713" watchObservedRunningTime="2024-02-09 08:56:25.188024458 +0000 UTC m=+60.766579686" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.205 [WARNING][4447] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0", GenerateName:"calico-kube-controllers-7ffc8f9f79-", Namespace:"calico-system", SelfLink:"", UID:"1409cbc3-f199-4f73-86e3-3a904676c00d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ffc8f9f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c8983346e4628fc632141708123035c69a20ce38c5e506da72cd3b5a9bdf728b", Pod:"calico-kube-controllers-7ffc8f9f79-lg8qn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9a91e18c28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.205 [INFO][4447] k8s.go 578: Cleaning up netns ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.205 [INFO][4447] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" iface="eth0" netns="" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.205 [INFO][4447] k8s.go 585: Releasing IP address(es) ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.205 [INFO][4447] utils.go 188: Calico CNI releasing IP address ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.231 [INFO][4453] ipam_plugin.go 415: Releasing address using handleID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.231 [INFO][4453] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.231 [INFO][4453] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.239 [WARNING][4453] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.239 [INFO][4453] ipam_plugin.go 443: Releasing address using workloadID ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" HandleID="k8s-pod-network.66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--kube--controllers--7ffc8f9f79--lg8qn-eth0" Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.242 [INFO][4453] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:25.245937 env[1192]: 2024-02-09 08:56:25.244 [INFO][4447] k8s.go 591: Teardown processing complete. ContainerID="66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046" Feb 9 08:56:25.246650 env[1192]: time="2024-02-09T08:56:25.246612925Z" level=info msg="TearDown network for sandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" successfully" Feb 9 08:56:25.251085 env[1192]: time="2024-02-09T08:56:25.251042113Z" level=info msg="RemovePodSandbox \"66395214d5c588a5c3344693297e32785f49741517a5c9df36bade6618c35046\" returns successfully" Feb 9 08:56:25.251839 env[1192]: time="2024-02-09T08:56:25.251797651Z" level=info msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.292 [WARNING][4471] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a86e9fee-b3a3-441d-8e06-482d03abae6a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6", Pod:"csi-node-driver-kk2hr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4d1608a516c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.292 [INFO][4471] k8s.go 578: Cleaning up netns ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.292 
[INFO][4471] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" iface="eth0" netns="" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.292 [INFO][4471] k8s.go 585: Releasing IP address(es) ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.292 [INFO][4471] utils.go 188: Calico CNI releasing IP address ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.316 [INFO][4477] ipam_plugin.go 415: Releasing address using handleID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.318 [INFO][4477] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.318 [INFO][4477] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.326 [WARNING][4477] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.326 [INFO][4477] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.329 [INFO][4477] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:25.333630 env[1192]: 2024-02-09 08:56:25.330 [INFO][4471] k8s.go 591: Teardown processing complete. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.333630 env[1192]: time="2024-02-09T08:56:25.332658608Z" level=info msg="TearDown network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" successfully" Feb 9 08:56:25.333630 env[1192]: time="2024-02-09T08:56:25.332693683Z" level=info msg="StopPodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" returns successfully" Feb 9 08:56:25.334847 env[1192]: time="2024-02-09T08:56:25.334674366Z" level=info msg="RemovePodSandbox for \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" Feb 9 08:56:25.334847 env[1192]: time="2024-02-09T08:56:25.334714201Z" level=info msg="Forcibly stopping sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\"" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.376 [WARNING][4495] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a86e9fee-b3a3-441d-8e06-482d03abae6a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 55, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"c98d0563515c20b6d47cecda7f5f1e9cf99fd396bcb031833710f75b771351e6", Pod:"csi-node-driver-kk2hr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4d1608a516c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.376 [INFO][4495] k8s.go 578: Cleaning up netns ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.376 [INFO][4495] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" iface="eth0" netns="" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.376 [INFO][4495] k8s.go 585: Releasing IP address(es) ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.376 [INFO][4495] utils.go 188: Calico CNI releasing IP address ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.399 [INFO][4501] ipam_plugin.go 415: Releasing address using handleID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.399 [INFO][4501] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.399 [INFO][4501] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.410 [WARNING][4501] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.410 [INFO][4501] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" HandleID="k8s-pod-network.a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Workload="ci--3510.3.2--6--9c47918d0b-k8s-csi--node--driver--kk2hr-eth0" Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.412 [INFO][4501] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 08:56:25.416240 env[1192]: 2024-02-09 08:56:25.414 [INFO][4495] k8s.go 591: Teardown processing complete. ContainerID="a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd" Feb 9 08:56:25.417344 env[1192]: time="2024-02-09T08:56:25.416292235Z" level=info msg="TearDown network for sandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" successfully" Feb 9 08:56:25.420096 env[1192]: time="2024-02-09T08:56:25.420042118Z" level=info msg="RemovePodSandbox \"a26e068b3f2a633f38748b224c5ea773210b2e7bb49cd8cfbcf91d300fd8a0cd\" returns successfully" Feb 9 08:56:25.461326 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.zk4J3x.mount: Deactivated successfully. Feb 9 08:56:29.152246 kubelet[2165]: I0209 08:56:29.152183 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:56:29.153996 kubelet[2165]: I0209 08:56:29.153956 2165 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:56:29.227376 kubelet[2165]: I0209 08:56:29.227286 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea781d9b-0918-454e-b1d8-0d108f845653-calico-apiserver-certs\") pod \"calico-apiserver-548fbb7d57-5t5vs\" (UID: \"ea781d9b-0918-454e-b1d8-0d108f845653\") " pod="calico-apiserver/calico-apiserver-548fbb7d57-5t5vs" Feb 9 08:56:29.228582 kubelet[2165]: I0209 08:56:29.228545 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7dt2\" (UniqueName: \"kubernetes.io/projected/6ea9ca54-7f7e-4033-94ea-8bce133fb258-kube-api-access-l7dt2\") pod \"calico-apiserver-548fbb7d57-kg6ct\" (UID: \"6ea9ca54-7f7e-4033-94ea-8bce133fb258\") " pod="calico-apiserver/calico-apiserver-548fbb7d57-kg6ct" Feb 9 08:56:29.228808 kubelet[2165]: I0209 08:56:29.228784 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6ea9ca54-7f7e-4033-94ea-8bce133fb258-calico-apiserver-certs\") pod \"calico-apiserver-548fbb7d57-kg6ct\" (UID: \"6ea9ca54-7f7e-4033-94ea-8bce133fb258\") " pod="calico-apiserver/calico-apiserver-548fbb7d57-kg6ct" Feb 9 08:56:29.228956 kubelet[2165]: I0209 08:56:29.228945 2165 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mgv\" (UniqueName: \"kubernetes.io/projected/ea781d9b-0918-454e-b1d8-0d108f845653-kube-api-access-x7mgv\") pod \"calico-apiserver-548fbb7d57-5t5vs\" (UID: \"ea781d9b-0918-454e-b1d8-0d108f845653\") " pod="calico-apiserver/calico-apiserver-548fbb7d57-5t5vs" Feb 9 08:56:29.268618 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 
08:56:29.268788 kernel: audit: type=1325 audit(1707468989.263:335): table=filter:127 family=2 entries=7 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.268821 kernel: audit: type=1300 audit(1707468989.263:335): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffed46e4e40 a2=0 a3=7ffed46e4e2c items=0 ppid=2381 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.263000 audit[4534]: NETFILTER_CFG table=filter:127 family=2 entries=7 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.263000 audit[4534]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffed46e4e40 a2=0 a3=7ffed46e4e2c items=0 ppid=2381 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.274730 kernel: audit: type=1327 audit(1707468989.263:335): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.265000 audit[4534]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.279611 kernel: audit: type=1325 audit(1707468989.265:336): table=nat:128 family=2 entries=78 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.265000 audit[4534]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffed46e4e40 a2=0 a3=7ffed46e4e2c items=0 ppid=2381 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.285603 kernel: audit: type=1300 audit(1707468989.265:336): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffed46e4e40 a2=0 a3=7ffed46e4e2c items=0 ppid=2381 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.292585 kernel: audit: type=1327 audit(1707468989.265:336): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.412000 audit[4564]: NETFILTER_CFG table=filter:129 family=2 entries=8 op=nft_register_rule pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.412000 audit[4564]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff145a59e0 a2=0 a3=7fff145a59cc items=0 ppid=2381 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.421015 kernel: audit: type=1325 audit(1707468989.412:337): table=filter:129 family=2 
entries=8 op=nft_register_rule pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.421151 kernel: audit: type=1300 audit(1707468989.412:337): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff145a59e0 a2=0 a3=7fff145a59cc items=0 ppid=2381 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.423721 kernel: audit: type=1327 audit(1707468989.412:337): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.414000 audit[4564]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.428675 kernel: audit: type=1325 audit(1707468989.414:338): table=nat:130 family=2 entries=78 op=nft_register_rule pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:29.414000 audit[4564]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff145a59e0 a2=0 a3=7fff145a59cc items=0 ppid=2381 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:29.461470 env[1192]: time="2024-02-09T08:56:29.461055259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548fbb7d57-5t5vs,Uid:ea781d9b-0918-454e-b1d8-0d108f845653,Namespace:calico-apiserver,Attempt:0,}" Feb 9 08:56:29.462490 env[1192]: time="2024-02-09T08:56:29.462440658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548fbb7d57-kg6ct,Uid:6ea9ca54-7f7e-4033-94ea-8bce133fb258,Namespace:calico-apiserver,Attempt:0,}" Feb 9 08:56:29.702448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 08:56:29.702844 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali36bb0ced134: link becomes ready Feb 9 08:56:29.698184 systemd-networkd[1059]: cali36bb0ced134: Link UP Feb 9 08:56:29.702361 systemd-networkd[1059]: cali36bb0ced134: Gained carrier Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.561 [INFO][4573] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0 calico-apiserver-548fbb7d57- calico-apiserver 6ea9ca54-7f7e-4033-94ea-8bce133fb258 902 0 2024-02-09 08:56:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548fbb7d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b calico-apiserver-548fbb7d57-kg6ct eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36bb0ced134 [] []}} ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" 
WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.561 [INFO][4573] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.624 [INFO][4588] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" HandleID="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.645 [INFO][4588] ipam_plugin.go 268: Auto assigning IP ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" HandleID="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000501b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"calico-apiserver-548fbb7d57-kg6ct", "timestamp":"2024-02-09 08:56:29.624552673 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.645 [INFO][4588] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.645 [INFO][4588] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.645 [INFO][4588] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.648 [INFO][4588] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.656 [INFO][4588] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.661 [INFO][4588] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.664 [INFO][4588] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.667 [INFO][4588] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.668 [INFO][4588] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.670 [INFO][4588] ipam.go 1682: Creating new handle: k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9 Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.676 [INFO][4588] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.685 [INFO][4588] ipam.go 1216: Successfully claimed IPs: [192.168.70.197/26] block=192.168.70.192/26 handle="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.685 [INFO][4588] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.197/26] handle="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.685 [INFO][4588] ipam_plugin.go 377: Released host-wide IPAM lock. 
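The IPAM walk above confirms the node's affinity for block 192.168.70.192/26 and then claims 192.168.70.197 from it; 192.168.70.193 through .196 are already held by the calico-kube-controllers, coredns and csi-node-driver endpoints shown earlier, so the new pod gets ordinal 5. A short Go sketch of that block arithmetic using the standard net/netip package, with the addresses copied from the entries above:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block and the address IPAM claimed for this pod.
	block := netip.MustParsePrefix("192.168.70.192/26")
	claimed := netip.MustParseAddr("192.168.70.197")

	// A /26 block holds 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())

	// Ordinal = distance from the block's base address.
	base := block.Masked().Addr().As4()
	addr := claimed.As4()
	ordinal := int(addr[3]) - int(base[3]) // same /24, so only the last octet differs

	fmt.Printf("block contains %d addresses\n", size)                        // 64
	fmt.Printf("%s is inside block: %v\n", claimed, block.Contains(claimed)) // true
	fmt.Printf("ordinal of %s in %s: %d\n", claimed, block, ordinal)         // 5
}
```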
Feb 9 08:56:29.730024 env[1192]: 2024-02-09 08:56:29.685 [INFO][4588] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.197/26] IPv6=[] ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" HandleID="k8s-pod-network.563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.690 [INFO][4573] k8s.go 385: Populated endpoint ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0", GenerateName:"calico-apiserver-548fbb7d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ea9ca54-7f7e-4033-94ea-8bce133fb258", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548fbb7d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"calico-apiserver-548fbb7d57-kg6ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36bb0ced134", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.690 [INFO][4573] k8s.go 386: Calico CNI using IPs: [192.168.70.197/32] ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.690 [INFO][4573] dataplane_linux.go 68: Setting the host side veth name to cali36bb0ced134 ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.703 [INFO][4573] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.707 [INFO][4573] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0", GenerateName:"calico-apiserver-548fbb7d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ea9ca54-7f7e-4033-94ea-8bce133fb258", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548fbb7d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9", Pod:"calico-apiserver-548fbb7d57-kg6ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36bb0ced134", MAC:"56:a0:ab:63:ed:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:29.731370 env[1192]: 2024-02-09 08:56:29.727 [INFO][4573] k8s.go 491: Wrote updated endpoint to datastore ContainerID="563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-kg6ct" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--kg6ct-eth0" Feb 9 08:56:29.784452 systemd-networkd[1059]: calif0fb581ef17: Link UP Feb 9 08:56:29.789657 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif0fb581ef17: link becomes ready Feb 9 08:56:29.791018 systemd-networkd[1059]: calif0fb581ef17: Gained carrier Feb 9 08:56:29.792000 env[1192]: time="2024-02-09T08:56:29.786179999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:29.792000 env[1192]: time="2024-02-09T08:56:29.786242427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:29.792000 env[1192]: time="2024-02-09T08:56:29.786593407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:29.792000 env[1192]: time="2024-02-09T08:56:29.786821194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9 pid=4621 runtime=io.containerd.runc.v2 Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.568 [INFO][4566] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0 calico-apiserver-548fbb7d57- calico-apiserver ea781d9b-0918-454e-b1d8-0d108f845653 903 0 2024-02-09 08:56:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548fbb7d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-6-9c47918d0b calico-apiserver-548fbb7d57-5t5vs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0fb581ef17 [] []}} ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.568 [INFO][4566] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.626 [INFO][4589] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" HandleID="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.645 [INFO][4589] ipam_plugin.go 268: Auto assigning IP ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" HandleID="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-6-9c47918d0b", "pod":"calico-apiserver-548fbb7d57-5t5vs", "timestamp":"2024-02-09 08:56:29.626230801 +0000 UTC"}, Hostname:"ci-3510.3.2-6-9c47918d0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.645 [INFO][4589] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.685 [INFO][4589] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.685 [INFO][4589] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-6-9c47918d0b' Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.688 [INFO][4589] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.704 [INFO][4589] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.732 [INFO][4589] ipam.go 489: Trying affinity for 192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.736 [INFO][4589] ipam.go 155: Attempting to load block cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.740 [INFO][4589] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.192/26 host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.741 [INFO][4589] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.192/26 handle="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.743 [INFO][4589] ipam.go 1682: Creating new handle: k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2 Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.752 [INFO][4589] ipam.go 1203: Writing block in order to claim IPs block=192.168.70.192/26 handle="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.766 [INFO][4589] ipam.go 1216: Successfully claimed IPs: [192.168.70.198/26] block=192.168.70.192/26 handle="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.766 [INFO][4589] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.198/26] handle="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" host="ci-3510.3.2-6-9c47918d0b" Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.766 [INFO][4589] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 08:56:29.816691 env[1192]: 2024-02-09 08:56:29.766 [INFO][4589] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.70.198/26] IPv6=[] ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" HandleID="k8s-pod-network.5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Workload="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.771 [INFO][4566] k8s.go 385: Populated endpoint ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0", GenerateName:"calico-apiserver-548fbb7d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea781d9b-0918-454e-b1d8-0d108f845653", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548fbb7d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"", Pod:"calico-apiserver-548fbb7d57-5t5vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0fb581ef17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.772 [INFO][4566] k8s.go 386: Calico CNI using IPs: [192.168.70.198/32] ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.772 [INFO][4566] dataplane_linux.go 68: Setting the host side veth name to calif0fb581ef17 ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.789 [INFO][4566] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.794 [INFO][4566] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0", GenerateName:"calico-apiserver-548fbb7d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea781d9b-0918-454e-b1d8-0d108f845653", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 8, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548fbb7d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-6-9c47918d0b", ContainerID:"5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2", Pod:"calico-apiserver-548fbb7d57-5t5vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0fb581ef17", MAC:"ae:ea:4d:c0:cf:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 08:56:29.818729 env[1192]: 2024-02-09 08:56:29.810 [INFO][4566] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2" Namespace="calico-apiserver" Pod="calico-apiserver-548fbb7d57-5t5vs" WorkloadEndpoint="ci--3510.3.2--6--9c47918d0b-k8s-calico--apiserver--548fbb7d57--5t5vs-eth0" Feb 9 08:56:29.819000 audit[4644]: NETFILTER_CFG table=filter:131 family=2 entries=59 op=nft_register_chain pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:29.819000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7fff6a88bb00 a2=0 a3=7fff6a88baec items=0 ppid=3358 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.819000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:29.859342 env[1192]: time="2024-02-09T08:56:29.859237616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:56:29.859342 env[1192]: time="2024-02-09T08:56:29.859302033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:56:29.859582 env[1192]: time="2024-02-09T08:56:29.859314687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:56:29.859752 env[1192]: time="2024-02-09T08:56:29.859641756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2 pid=4667 runtime=io.containerd.runc.v2 Feb 9 08:56:29.913000 audit[4693]: NETFILTER_CFG table=filter:132 family=2 entries=56 op=nft_register_chain pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 08:56:29.913000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=27348 a0=3 a1=7fffd2304c90 a2=0 a3=7fffd2304c7c items=0 ppid=3358 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:29.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 08:56:29.935480 env[1192]: time="2024-02-09T08:56:29.935440098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548fbb7d57-kg6ct,Uid:6ea9ca54-7f7e-4033-94ea-8bce133fb258,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9\"" Feb 9 08:56:29.944201 env[1192]: time="2024-02-09T08:56:29.944121469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 08:56:29.987306 env[1192]: time="2024-02-09T08:56:29.987251414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548fbb7d57-5t5vs,Uid:ea781d9b-0918-454e-b1d8-0d108f845653,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2\"" Feb 9 08:56:31.473086 systemd-networkd[1059]: cali36bb0ced134: Gained IPv6LL Feb 9 08:56:31.536857 systemd-networkd[1059]: calif0fb581ef17: Gained IPv6LL Feb 9 08:56:33.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-143.198.159.117:22-139.178.89.65:39208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:33.990943 systemd[1]: Started sshd@7-143.198.159.117:22-139.178.89.65:39208.service. 
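The ipam.go and k8s.go entries above record Calico's block-affinity IPAM flow for the two calico-apiserver pods: node ci-3510.3.2-6-9c47918d0b holds an affinity for the 192.168.70.192/26 block, the plugin loads that block under a host-wide lock, claims the next free address (192.168.70.197 for the kg6ct pod, 192.168.70.198 for the 5t5vs pod), and writes the resulting WorkloadEndpoint back to the datastore before the sandbox starts. The Go sketch below is a purely illustrative, self-contained model of the "pick the next free address in an affine block" step; it is not Calico's implementation, and the function names are invented for illustration.

// Purely illustrative sketch (not Calico's code): pick the next free address
// from a host-affine /26 block, mirroring the ipam.go flow logged above
// ("Trying affinity for 192.168.70.192/26" -> "Attempting to assign 1 addresses from block").
package main

import (
	"fmt"
	"net"
)

// assignFromBlock returns the first address in cidr that is not already used.
// Real IPAM also reserves special addresses and persists the claim in the datastore.
func assignFromBlock(cidr string, used map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = nextIP(cur) {
		if !used[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", cidr)
}

// nextIP returns ip + 1, carrying across bytes.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	// Toy state: the network address and one pod address are already taken.
	used := map[string]bool{"192.168.70.192": true, "192.168.70.197": true}
	ip, err := assignFromBlock("192.168.70.192/26", used)
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // e.g. 192.168.70.193 in this toy example
}
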
Feb 9 08:56:34.106000 audit[4743]: USER_ACCT pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.107575 sshd[4743]: Accepted publickey for core from 139.178.89.65 port 39208 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:34.110000 audit[4743]: CRED_ACQ pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.110000 audit[4743]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4eae2800 a2=3 a3=0 items=0 ppid=1 pid=4743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:34.110000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:34.123363 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:34.137908 systemd-logind[1182]: New session 8 of user core. Feb 9 08:56:34.139448 systemd[1]: Started session-8.scope. Feb 9 08:56:34.144000 audit[4743]: USER_START pid=4743 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.146000 audit[4746]: CRED_ACQ pid=4746 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.531792 sshd[4743]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:34.533000 audit[4743]: USER_END pid=4743 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.534833 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 9 08:56:34.534969 kernel: audit: type=1106 audit(1707468994.533:347): pid=4743 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.538000 audit[4743]: CRED_DISP pid=4743 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.544640 kernel: audit: type=1104 audit(1707468994.538:348): pid=4743 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:34.545867 systemd[1]: sshd@7-143.198.159.117:22-139.178.89.65:39208.service: Deactivated successfully. 
Feb 9 08:56:34.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-143.198.159.117:22-139.178.89.65:39208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:34.547298 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 08:56:34.549119 systemd-logind[1182]: Session 8 logged out. Waiting for processes to exit. Feb 9 08:56:34.550586 kernel: audit: type=1131 audit(1707468994.545:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-143.198.159.117:22-139.178.89.65:39208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:34.550529 systemd-logind[1182]: Removed session 8. Feb 9 08:56:34.573617 env[1192]: time="2024-02-09T08:56:34.573521067Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:34.577588 env[1192]: time="2024-02-09T08:56:34.577419352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:34.581585 env[1192]: time="2024-02-09T08:56:34.581500938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:34.584641 env[1192]: time="2024-02-09T08:56:34.584555752Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:34.585702 env[1192]: time="2024-02-09T08:56:34.585663660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 08:56:34.589764 env[1192]: time="2024-02-09T08:56:34.587613626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 08:56:34.592210 env[1192]: time="2024-02-09T08:56:34.591351890Z" level=info msg="CreateContainer within sandbox \"563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 08:56:34.610118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253169613.mount: Deactivated successfully. Feb 9 08:56:34.621400 env[1192]: time="2024-02-09T08:56:34.621210294Z" level=info msg="CreateContainer within sandbox \"563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c61f6b15de50b8b6816998419081bfa7ccfda5db9bfce302119312c780d60ab6\"" Feb 9 08:56:34.624199 env[1192]: time="2024-02-09T08:56:34.622408415Z" level=info msg="StartContainer for \"c61f6b15de50b8b6816998419081bfa7ccfda5db9bfce302119312c780d60ab6\"" Feb 9 08:56:34.667652 systemd[1]: run-containerd-runc-k8s.io-c61f6b15de50b8b6816998419081bfa7ccfda5db9bfce302119312c780d60ab6-runc.HtahFz.mount: Deactivated successfully. 
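The containerd entries around this point trace the CRI container lifecycle for the calico-apiserver pods: PullImage returns an image reference, CreateContainer hands back a container id inside the sandbox created earlier, and StartContainer reports success just below. The sketch that follows outlines how a CRI client drives that same PullImage, CreateContainer, StartContainer sequence over containerd's socket; the socket path is an assumption, the pod SandboxConfig a real call would carry is omitted, and the whole thing is an outline of the protocol visible in the log rather than the kubelet's actual code.

// Hedged outline of the PullImage -> CreateContainer -> StartContainer
// sequence the containerd entries record. Socket path and config fields
// are assumptions for illustration, not taken from this host.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint is typically this unix socket (assumption).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	images := runtimeapi.NewImageServiceClient(conn)
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. Pull the image referenced in the log.
	pulled, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/apiserver:v3.27.0"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Create the container inside the existing pod sandbox.
	//    A real call also passes the pod's SandboxConfig; omitted here.
	sandboxID := "563bd7357093aad14af9cebdf1433c0a9778a4cb2e138d154b85e1649e80a0d9"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Start it.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", created.ContainerId)
}

In this log the same sequence runs twice, once per apiserver replica, which is why a second CreateContainer and StartContainer pair appears shortly afterwards for sandbox 5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2.
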
Feb 9 08:56:34.744676 env[1192]: time="2024-02-09T08:56:34.744609802Z" level=info msg="StartContainer for \"c61f6b15de50b8b6816998419081bfa7ccfda5db9bfce302119312c780d60ab6\" returns successfully" Feb 9 08:56:35.204790 env[1192]: time="2024-02-09T08:56:35.204739139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:35.207983 env[1192]: time="2024-02-09T08:56:35.207938119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:35.210737 env[1192]: time="2024-02-09T08:56:35.210695922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:35.235223 env[1192]: time="2024-02-09T08:56:35.235155897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 08:56:35.246385 env[1192]: time="2024-02-09T08:56:35.246315234Z" level=info msg="CreateContainer within sandbox \"5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 08:56:35.256826 env[1192]: time="2024-02-09T08:56:35.249630948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 08:56:35.281573 env[1192]: time="2024-02-09T08:56:35.281498520Z" level=info msg="CreateContainer within sandbox \"5e3fc6e3d8154bc8856b6d0d5ca38c22ad8cccd720ec123d370b0c1e969c50e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7cc8b568abdc05df14988941475b0d0dbcef115dcc511e7926a59dabbd3d0e10\"" Feb 9 08:56:35.282296 env[1192]: time="2024-02-09T08:56:35.282265216Z" level=info msg="StartContainer for \"7cc8b568abdc05df14988941475b0d0dbcef115dcc511e7926a59dabbd3d0e10\"" Feb 9 08:56:35.473086 env[1192]: time="2024-02-09T08:56:35.472949767Z" level=info msg="StartContainer for \"7cc8b568abdc05df14988941475b0d0dbcef115dcc511e7926a59dabbd3d0e10\" returns successfully" Feb 9 08:56:35.501000 audit[4856]: NETFILTER_CFG table=filter:133 family=2 entries=8 op=nft_register_rule pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:35.504615 kernel: audit: type=1325 audit(1707468995.501:350): table=filter:133 family=2 entries=8 op=nft_register_rule pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:35.501000 audit[4856]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd44c94ef0 a2=0 a3=7ffd44c94edc items=0 ppid=2381 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:35.510590 kernel: audit: type=1300 audit(1707468995.501:350): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd44c94ef0 a2=0 a3=7ffd44c94edc items=0 ppid=2381 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:35.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:35.513637 kernel: audit: type=1327 audit(1707468995.501:350): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:35.516000 audit[4856]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:35.520618 kernel: audit: type=1325 audit(1707468995.516:351): table=nat:134 family=2 entries=78 op=nft_register_rule pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:35.516000 audit[4856]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd44c94ef0 a2=0 a3=7ffd44c94edc items=0 ppid=2381 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:35.528660 kernel: audit: type=1300 audit(1707468995.516:351): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd44c94ef0 a2=0 a3=7ffd44c94edc items=0 ppid=2381 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:35.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:35.533626 kernel: audit: type=1327 audit(1707468995.516:351): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:36.217532 kubelet[2165]: I0209 08:56:36.217475 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548fbb7d57-kg6ct" podStartSLOduration=-9.223372029638609e+09 pod.CreationTimestamp="2024-02-09 08:56:29 +0000 UTC" firstStartedPulling="2024-02-09 08:56:29.942706797 +0000 UTC m=+65.521261992" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:35.229907076 +0000 UTC m=+70.808462297" watchObservedRunningTime="2024-02-09 08:56:36.21616736 +0000 UTC m=+71.794722579" Feb 9 08:56:36.270000 audit[4885]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:36.274912 kernel: audit: type=1325 audit(1707468996.270:352): table=filter:135 family=2 entries=8 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:36.270000 audit[4885]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe790875c0 a2=0 a3=7ffe790875ac items=0 ppid=2381 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:36.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:36.276000 audit[4885]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:36.276000 audit[4885]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe790875c0 a2=0 a3=7ffe790875ac items=0 ppid=2381 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:36.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:39.538128 systemd[1]: Started sshd@8-143.198.159.117:22-139.178.89.65:49746.service. Feb 9 08:56:39.541794 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 08:56:39.541958 kernel: audit: type=1130 audit(1707468999.538:354): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.159.117:22-139.178.89.65:49746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:39.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.159.117:22-139.178.89.65:49746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:39.620000 audit[4886]: USER_ACCT pid=4886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.621812 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 49746 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:39.626678 kernel: audit: type=1101 audit(1707468999.620:355): pid=4886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.627000 audit[4886]: CRED_ACQ pid=4886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.636162 kernel: audit: type=1103 audit(1707468999.627:356): pid=4886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.636300 kernel: audit: type=1006 audit(1707468999.628:357): pid=4886 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 08:56:39.641050 kernel: audit: type=1300 audit(1707468999.628:357): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0fd08c80 a2=3 a3=0 items=0 ppid=1 pid=4886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:39.641135 kernel: audit: type=1327 audit(1707468999.628:357): proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:39.628000 audit[4886]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0fd08c80 a2=3 a3=0 items=0 ppid=1 pid=4886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 
9 08:56:39.628000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:39.643705 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:39.651288 systemd[1]: Started session-9.scope. Feb 9 08:56:39.651812 systemd-logind[1182]: New session 9 of user core. Feb 9 08:56:39.664000 audit[4886]: USER_START pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.671818 kernel: audit: type=1105 audit(1707468999.664:358): pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.671000 audit[4889]: CRED_ACQ pid=4889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.676717 kernel: audit: type=1103 audit(1707468999.671:359): pid=4889 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.878620 sshd[4886]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:39.878000 audit[4886]: USER_END pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.882000 audit[4886]: CRED_DISP pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.889480 kernel: audit: type=1106 audit(1707468999.878:360): pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.890044 kernel: audit: type=1104 audit(1707468999.882:361): pid=4886 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:39.890161 systemd[1]: sshd@8-143.198.159.117:22-139.178.89.65:49746.service: Deactivated successfully. Feb 9 08:56:39.891702 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 08:56:39.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.159.117:22-139.178.89.65:49746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:39.892594 systemd-logind[1182]: Session 9 logged out. Waiting for processes to exit. 
Feb 9 08:56:39.893763 systemd-logind[1182]: Removed session 9. Feb 9 08:56:40.837084 kubelet[2165]: E0209 08:56:40.837047 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:44.885090 systemd[1]: Started sshd@9-143.198.159.117:22-139.178.89.65:49752.service. Feb 9 08:56:44.891625 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:56:44.891772 kernel: audit: type=1130 audit(1707469004.883:363): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.159.117:22-139.178.89.65:49752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:44.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.159.117:22-139.178.89.65:49752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:44.942000 audit[4905]: USER_ACCT pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.948195 sshd[4905]: Accepted publickey for core from 139.178.89.65 port 49752 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:44.948634 kernel: audit: type=1101 audit(1707469004.942:364): pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.948000 audit[4905]: CRED_ACQ pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.950701 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:44.955641 kernel: audit: type=1103 audit(1707469004.948:365): pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.955749 kernel: audit: type=1006 audit(1707469004.948:366): pid=4905 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 08:56:44.948000 audit[4905]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0368a10 a2=3 a3=0 items=0 ppid=1 pid=4905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:44.960129 kernel: audit: type=1300 audit(1707469004.948:366): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0368a10 a2=3 a3=0 items=0 ppid=1 pid=4905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:44.948000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:44.964184 kernel: audit: type=1327 audit(1707469004.948:366): 
proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:44.964135 systemd[1]: Started session-10.scope. Feb 9 08:56:44.965215 systemd-logind[1182]: New session 10 of user core. Feb 9 08:56:44.968000 audit[4905]: USER_START pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.978822 kernel: audit: type=1105 audit(1707469004.968:367): pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.978931 kernel: audit: type=1103 audit(1707469004.968:368): pid=4908 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:44.968000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:45.240662 sshd[4905]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:45.240000 audit[4905]: USER_END pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:45.249942 kernel: audit: type=1106 audit(1707469005.240:369): pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:45.250889 kernel: audit: type=1104 audit(1707469005.240:370): pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:45.240000 audit[4905]: CRED_DISP pid=4905 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:45.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.159.117:22-139.178.89.65:49752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:45.243955 systemd[1]: sshd@9-143.198.159.117:22-139.178.89.65:49752.service: Deactivated successfully. Feb 9 08:56:45.245048 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 08:56:45.246920 systemd-logind[1182]: Session 10 logged out. Waiting for processes to exit. Feb 9 08:56:45.247958 systemd-logind[1182]: Removed session 10. 
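Every SSH connection in this part of the log leaves the same footprint: an sshd Accepted publickey line, pam_unix session open and close, and audit USER_START and USER_END records that share a session id (ses=8, ses=9, ses=10, and so on). The helper below is a small, self-contained illustration of reading such a journal dump; it pairs USER_START with USER_END by ses= value and prints how long each session stayed open. It is not a tool shipped with Flatcar, and the regular expression simply mirrors the line format seen here.

// Illustrative helper (not part of Flatcar or systemd): pair the audit
// USER_START/USER_END records by session id ("ses=") and print how long
// each SSH session stayed open. Reads a journal dump on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) audit\[\d+\]: (USER_START|USER_END) .*\bses=(\d+)\b`)

// Journal prefix format used in this log; the year is absent, which is fine
// for computing durations within one dump.
const stamp = "Jan _2 15:04:05.000000"

func main() {
	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // audit lines can be long
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse(stamp, m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "USER_START":
			started[m[3]] = ts
		case "USER_END":
			if begin, ok := started[m[3]]; ok {
				fmt.Printf("session %s: %v\n", m[3], ts.Sub(begin))
			}
		}
	}
}

Fed the records above, it would show sessions 8 through 10 each lasting only a few hundred milliseconds.
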
Feb 9 08:56:50.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.159.117:22-139.178.89.65:36342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:50.247826 systemd[1]: Started sshd@10-143.198.159.117:22-139.178.89.65:36342.service. Feb 9 08:56:50.249553 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:56:50.253762 kernel: audit: type=1130 audit(1707469010.246:372): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.159.117:22-139.178.89.65:36342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:50.308000 audit[4921]: USER_ACCT pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.310056 sshd[4921]: Accepted publickey for core from 139.178.89.65 port 36342 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:50.320000 audit[4921]: CRED_ACQ pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.322710 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:50.326387 kernel: audit: type=1101 audit(1707469010.308:373): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.326496 kernel: audit: type=1103 audit(1707469010.320:374): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.326523 kernel: audit: type=1006 audit(1707469010.320:375): pid=4921 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 08:56:50.320000 audit[4921]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf1707810 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:50.333470 kernel: audit: type=1300 audit(1707469010.320:375): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf1707810 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:50.333103 systemd[1]: Started session-11.scope. Feb 9 08:56:50.320000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:50.334275 systemd-logind[1182]: New session 11 of user core. 
Feb 9 08:56:50.336714 kernel: audit: type=1327 audit(1707469010.320:375): proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:50.341000 audit[4921]: USER_START pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.348625 kernel: audit: type=1105 audit(1707469010.341:376): pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.343000 audit[4924]: CRED_ACQ pid=4924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.352644 kernel: audit: type=1103 audit(1707469010.343:377): pid=4924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.519242 sshd[4921]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:50.520000 audit[4921]: USER_END pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.524063 systemd[1]: sshd@10-143.198.159.117:22-139.178.89.65:36342.service: Deactivated successfully. Feb 9 08:56:50.524990 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 08:56:50.527690 kernel: audit: type=1106 audit(1707469010.520:378): pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.528303 systemd-logind[1182]: Session 11 logged out. Waiting for processes to exit. Feb 9 08:56:50.520000 audit[4921]: CRED_DISP pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.159.117:22-139.178.89.65:36342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:50.533946 kernel: audit: type=1104 audit(1707469010.520:379): pid=4921 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:50.534004 systemd-logind[1182]: Removed session 11. 
Feb 9 08:56:52.837276 kubelet[2165]: E0209 08:56:52.837238 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:53.838204 kubelet[2165]: E0209 08:56:53.838153 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:54.506090 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.EZ9kjF.mount: Deactivated successfully. Feb 9 08:56:54.837903 kubelet[2165]: E0209 08:56:54.837653 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:56:55.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.159.117:22-139.178.89.65:36354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:55.526346 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:56:55.526425 kernel: audit: type=1130 audit(1707469015.523:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.159.117:22-139.178.89.65:36354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:55.524445 systemd[1]: Started sshd@11-143.198.159.117:22-139.178.89.65:36354.service. Feb 9 08:56:55.599000 audit[4961]: USER_ACCT pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.601822 sshd[4961]: Accepted publickey for core from 139.178.89.65 port 36354 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:55.605675 kernel: audit: type=1101 audit(1707469015.599:382): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.606000 audit[4961]: CRED_ACQ pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.613978 kernel: audit: type=1103 audit(1707469015.606:383): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.614097 kernel: audit: type=1006 audit(1707469015.606:384): pid=4961 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 08:56:55.606000 audit[4961]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc970aca40 a2=3 a3=0 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
08:56:55.619710 kernel: audit: type=1300 audit(1707469015.606:384): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc970aca40 a2=3 a3=0 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:55.619872 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:55.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:55.622595 kernel: audit: type=1327 audit(1707469015.606:384): proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:55.629828 systemd[1]: Started session-12.scope. Feb 9 08:56:55.630077 systemd-logind[1182]: New session 12 of user core. Feb 9 08:56:55.639000 audit[4961]: USER_START pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.646621 kernel: audit: type=1105 audit(1707469015.639:385): pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.642000 audit[4964]: CRED_ACQ pid=4964 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.654691 kernel: audit: type=1103 audit(1707469015.642:386): pid=4964 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.843308 sshd[4961]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:55.846000 audit[4961]: USER_END pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.850039 systemd[1]: Started sshd@12-143.198.159.117:22-139.178.89.65:36364.service. 
Feb 9 08:56:55.853707 kernel: audit: type=1106 audit(1707469015.846:387): pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.852000 audit[4961]: CRED_DISP pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.859710 kernel: audit: type=1104 audit(1707469015.852:388): pid=4961 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-143.198.159.117:22-139.178.89.65:36364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:55.862631 systemd-logind[1182]: Session 12 logged out. Waiting for processes to exit. Feb 9 08:56:55.863178 systemd[1]: sshd@11-143.198.159.117:22-139.178.89.65:36354.service: Deactivated successfully. Feb 9 08:56:55.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.159.117:22-139.178.89.65:36354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:55.864750 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 08:56:55.865597 systemd-logind[1182]: Removed session 12. Feb 9 08:56:55.913000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.915418 sshd[4973]: Accepted publickey for core from 139.178.89.65 port 36364 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:55.915000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.915000 audit[4973]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee45b87d0 a2=3 a3=0 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:55.915000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:55.917194 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:55.923136 systemd-logind[1182]: New session 13 of user core. Feb 9 08:56:55.924094 systemd[1]: Started session-13.scope. 
Feb 9 08:56:55.933000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:55.938000 audit[4978]: CRED_ACQ pid=4978 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.401127 systemd[1]: Started sshd@13-143.198.159.117:22-139.178.89.65:36376.service. Feb 9 08:56:57.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-143.198.159.117:22-139.178.89.65:36376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:57.407798 sshd[4973]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:57.417000 audit[4973]: USER_END pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.418000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.423079 systemd[1]: sshd@12-143.198.159.117:22-139.178.89.65:36364.service: Deactivated successfully. Feb 9 08:56:57.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-143.198.159.117:22-139.178.89.65:36364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:57.424611 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 08:56:57.424632 systemd-logind[1182]: Session 13 logged out. Waiting for processes to exit. Feb 9 08:56:57.431020 systemd-logind[1182]: Removed session 13. 
Feb 9 08:56:57.488000 audit[4984]: USER_ACCT pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.490497 sshd[4984]: Accepted publickey for core from 139.178.89.65 port 36376 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:56:57.489000 audit[4984]: CRED_ACQ pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.489000 audit[4984]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff57534180 a2=3 a3=0 items=0 ppid=1 pid=4984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:57.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:56:57.492837 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:56:57.498648 systemd-logind[1182]: New session 14 of user core. Feb 9 08:56:57.499143 systemd[1]: Started session-14.scope. Feb 9 08:56:57.504000 audit[4984]: USER_START pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.507000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.725140 sshd[4984]: pam_unix(sshd:session): session closed for user core Feb 9 08:56:57.726000 audit[4984]: USER_END pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.727000 audit[4984]: CRED_DISP pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:56:57.731472 systemd-logind[1182]: Session 14 logged out. Waiting for processes to exit. Feb 9 08:56:57.731802 systemd[1]: sshd@13-143.198.159.117:22-139.178.89.65:36376.service: Deactivated successfully. Feb 9 08:56:57.732819 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 08:56:57.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-143.198.159.117:22-139.178.89.65:36376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:56:57.734198 systemd-logind[1182]: Removed session 14. Feb 9 08:56:59.518254 systemd[1]: run-containerd-runc-k8s.io-7cc8b568abdc05df14988941475b0d0dbcef115dcc511e7926a59dabbd3d0e10-runc.sIQekH.mount: Deactivated successfully. 
Feb 9 08:56:59.583594 kubelet[2165]: I0209 08:56:59.582544 2165 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548fbb7d57-5t5vs" podStartSLOduration=-9.223372006272299e+09 pod.CreationTimestamp="2024-02-09 08:56:29 +0000 UTC" firstStartedPulling="2024-02-09 08:56:29.989478257 +0000 UTC m=+65.568033450" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:56:36.21785096 +0000 UTC m=+71.796406155" watchObservedRunningTime="2024-02-09 08:56:59.582476512 +0000 UTC m=+95.161031727" Feb 9 08:56:59.660000 audit[5061]: NETFILTER_CFG table=filter:137 family=2 entries=7 op=nft_register_rule pid=5061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:59.660000 audit[5061]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff384e07c0 a2=0 a3=7fff384e07ac items=0 ppid=2381 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:59.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:59.663000 audit[5061]: NETFILTER_CFG table=nat:138 family=2 entries=85 op=nft_register_chain pid=5061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:59.663000 audit[5061]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7fff384e07c0 a2=0 a3=7fff384e07ac items=0 ppid=2381 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:59.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:59.745000 audit[5087]: NETFILTER_CFG table=filter:139 family=2 entries=6 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:59.745000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe91715450 a2=0 a3=7ffe9171543c items=0 ppid=2381 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:59.745000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:56:59.748000 audit[5087]: NETFILTER_CFG table=nat:140 family=2 entries=92 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:56:59.748000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe91715450 a2=0 a3=7ffe9171543c items=0 ppid=2381 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:56:59.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:01.659268 systemd[1]: run-containerd-runc-k8s.io-77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d-runc.7LBcHq.mount: Deactivated successfully. 
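The podStartSLOduration=-9.223372006272299e+09 in the kubelet record above is an artifact of the zero lastFinishedPulling timestamp (0001-01-01 00:00:00 +0000 UTC) in the same line: Go's time.Time.Sub saturates at the minimum time.Duration (math.MinInt64 nanoseconds, roughly -9.2233720369e+09 seconds), so any latency term computed against the zero time bottoms out there; the ~30.58 s offset from that exact minimum matches the pod's age in the record (watchObservedRunningTime minus pod.CreationTimestamp). The exact kubelet arithmetic is assumed rather than quoted; a minimal sketch of the saturation itself:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// lastFinishedPulling is the zero time in the record above (0001-01-01 00:00:00 UTC).
    	var lastFinishedPulling time.Time
    	// firstStartedPulling as logged: 2024-02-09 08:56:29.989478257 UTC.
    	firstStartedPulling := time.Date(2024, 2, 9, 8, 56, 29, 989478257, time.UTC)

    	// time.Time.Sub clamps results that overflow time.Duration, so this is
    	// math.MinInt64 nanoseconds rather than the true ~2023-year difference.
    	d := lastFinishedPulling.Sub(firstStartedPulling)
    	fmt.Printf("%.6e seconds\n", d.Seconds()) // ≈ -9.223372e+09
    }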
Feb 9 08:57:02.731394 systemd[1]: Started sshd@14-143.198.159.117:22-139.178.89.65:53890.service. Feb 9 08:57:02.737533 kernel: kauditd_printk_skb: 35 callbacks suppressed Feb 9 08:57:02.737714 kernel: audit: type=1130 audit(1707469022.730:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.159.117:22-139.178.89.65:53890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:02.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.159.117:22-139.178.89.65:53890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:02.783000 audit[5112]: USER_ACCT pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.785885 sshd[5112]: Accepted publickey for core from 139.178.89.65 port 53890 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:02.789000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.792073 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:02.795769 kernel: audit: type=1101 audit(1707469022.783:413): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.795892 kernel: audit: type=1103 audit(1707469022.789:414): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.798757 kernel: audit: type=1006 audit(1707469022.790:415): pid=5112 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 9 08:57:02.790000 audit[5112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecbd41ff0 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:02.803113 kernel: audit: type=1300 audit(1707469022.790:415): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecbd41ff0 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:02.790000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:02.805595 kernel: audit: type=1327 audit(1707469022.790:415): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:02.809191 systemd-logind[1182]: New session 15 of user core. Feb 9 08:57:02.810620 systemd[1]: Started session-15.scope. 
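In stamps such as audit(1707469022.730:412), the number before the colon is the Unix epoch time in seconds.milliseconds and the number after it is the per-boot event serial; converting the epoch part reproduces the wall-clock time on the surrounding syslog lines (08:57:02.730 UTC). A minimal Go sketch of the conversion, using a timestamp copied from the records above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// audit(1707469022.730:412): 1707469022 s + 730 ms since the Unix epoch, serial 412.
    	t := time.Unix(1707469022, 730*int64(time.Millisecond))
    	fmt.Println(t.UTC().Format("Jan 2 15:04:05.000 MST")) // Feb 9 08:57:02.730 UTC
    }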
Feb 9 08:57:02.816000 audit[5112]: USER_START pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.819000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.829873 kernel: audit: type=1105 audit(1707469022.816:416): pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:02.830054 kernel: audit: type=1103 audit(1707469022.819:417): pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:03.023707 sshd[5112]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:03.024000 audit[5112]: USER_END pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:03.028367 systemd[1]: sshd@14-143.198.159.117:22-139.178.89.65:53890.service: Deactivated successfully. Feb 9 08:57:03.029508 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 08:57:03.032606 systemd-logind[1182]: Session 15 logged out. Waiting for processes to exit. Feb 9 08:57:03.024000 audit[5112]: CRED_DISP pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:03.033938 systemd-logind[1182]: Removed session 15. Feb 9 08:57:03.037271 kernel: audit: type=1106 audit(1707469023.024:418): pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:03.037397 kernel: audit: type=1104 audit(1707469023.024:419): pid=5112 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:03.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.159.117:22-139.178.89.65:53890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:08.026034 systemd[1]: Started sshd@15-143.198.159.117:22-139.178.89.65:53896.service. 
Feb 9 08:57:08.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.159.117:22-139.178.89.65:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:08.028192 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:08.028287 kernel: audit: type=1130 audit(1707469028.025:421): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.159.117:22-139.178.89.65:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:08.107000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.109867 sshd[5126]: Accepted publickey for core from 139.178.89.65 port 53896 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:08.114610 kernel: audit: type=1101 audit(1707469028.107:422): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.117387 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:08.113000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.122672 kernel: audit: type=1103 audit(1707469028.113:423): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.122809 kernel: audit: type=1006 audit(1707469028.113:424): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 08:57:08.130648 systemd-logind[1182]: New session 16 of user core. Feb 9 08:57:08.113000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff7106070 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:08.132179 systemd[1]: Started session-16.scope. 
Feb 9 08:57:08.135599 kernel: audit: type=1300 audit(1707469028.113:424): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff7106070 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:08.113000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:08.140000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.151285 kernel: audit: type=1327 audit(1707469028.113:424): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:08.151428 kernel: audit: type=1105 audit(1707469028.140:425): pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.142000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.165604 kernel: audit: type=1103 audit(1707469028.142:426): pid=5129 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.347403 sshd[5126]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:08.347000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.347000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.355772 systemd[1]: sshd@15-143.198.159.117:22-139.178.89.65:53896.service: Deactivated successfully. Feb 9 08:57:08.356738 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 08:57:08.357631 kernel: audit: type=1106 audit(1707469028.347:427): pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.357704 kernel: audit: type=1104 audit(1707469028.347:428): pid=5126 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:08.358325 systemd-logind[1182]: Session 16 logged out. Waiting for processes to exit. 
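For reference when reading the interleaved records here, the numeric kernel echoes ("audit: type=NNNN ...") and the named userspace records are the same events, and the pairings visible in this section follow the message-type constants from linux/audit.h. A small Go lookup table of the types that appear above (the names are the standard audit constants, not anything specific to this host):

    package main

    import "fmt"

    // Audit record types seen in this section, per linux/audit.h.
    var auditTypes = map[int]string{
    	1006: "LOGIN",         // auid/ses assignment on login
    	1101: "USER_ACCT",     // PAM accounting
    	1103: "CRED_ACQ",      // PAM setcred (acquire)
    	1104: "CRED_DISP",     // PAM setcred (dispose)
    	1105: "USER_START",    // PAM session open
    	1106: "USER_END",      // PAM session close
    	1130: "SERVICE_START", // systemd unit started
    	1131: "SERVICE_STOP",  // systemd unit stopped
    	1300: "SYSCALL",       // syscall record
    	1327: "PROCTITLE",     // hex-encoded process title
    }

    func main() {
    	fmt.Println(1105, auditTypes[1105]) // 1105 USER_START
    }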
Feb 9 08:57:08.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.159.117:22-139.178.89.65:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:08.360224 systemd-logind[1182]: Removed session 16. Feb 9 08:57:13.353157 systemd[1]: Started sshd@16-143.198.159.117:22-139.178.89.65:57228.service. Feb 9 08:57:13.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.159.117:22-139.178.89.65:57228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:13.356582 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:13.356687 kernel: audit: type=1130 audit(1707469033.352:430): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.159.117:22-139.178.89.65:57228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:13.407000 audit[5143]: USER_ACCT pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.409347 sshd[5143]: Accepted publickey for core from 139.178.89.65 port 57228 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:13.413593 kernel: audit: type=1101 audit(1707469033.407:431): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.412000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.414938 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:13.418701 kernel: audit: type=1103 audit(1707469033.412:432): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.418807 kernel: audit: type=1006 audit(1707469033.412:433): pid=5143 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 08:57:13.412000 audit[5143]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebd695d40 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:13.427610 kernel: audit: type=1300 audit(1707469033.412:433): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebd695d40 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:13.412000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:13.430708 kernel: audit: type=1327 audit(1707469033.412:433): 
proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:13.433653 systemd-logind[1182]: New session 17 of user core. Feb 9 08:57:13.434488 systemd[1]: Started session-17.scope. Feb 9 08:57:13.440000 audit[5143]: USER_START pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.455717 kernel: audit: type=1105 audit(1707469033.440:434): pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.454000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.460592 kernel: audit: type=1103 audit(1707469033.454:435): pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.584991 sshd[5143]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:13.584000 audit[5143]: USER_END pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.595974 kernel: audit: type=1106 audit(1707469033.584:436): pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.596062 kernel: audit: type=1104 audit(1707469033.584:437): pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.584000 audit[5143]: CRED_DISP pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:13.589189 systemd[1]: sshd@16-143.198.159.117:22-139.178.89.65:57228.service: Deactivated successfully. Feb 9 08:57:13.590146 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 08:57:13.596393 systemd-logind[1182]: Session 17 logged out. Waiting for processes to exit. Feb 9 08:57:13.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.159.117:22-139.178.89.65:57228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:13.597709 systemd-logind[1182]: Removed session 17. 
Feb 9 08:57:16.836920 kubelet[2165]: E0209 08:57:16.836874 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:57:18.590525 systemd[1]: Started sshd@17-143.198.159.117:22-139.178.89.65:36088.service. Feb 9 08:57:18.595632 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:18.595767 kernel: audit: type=1130 audit(1707469038.589:439): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.159.117:22-139.178.89.65:36088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:18.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.159.117:22-139.178.89.65:36088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:18.658826 sshd[5157]: Accepted publickey for core from 139.178.89.65 port 36088 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:18.660653 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:18.657000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.665820 kernel: audit: type=1101 audit(1707469038.657:440): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.665923 kernel: audit: type=1103 audit(1707469038.658:441): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.658000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.674215 kernel: audit: type=1006 audit(1707469038.658:442): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 08:57:18.674414 kernel: audit: type=1300 audit(1707469038.658:442): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecdc66a60 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:18.658000 audit[5157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecdc66a60 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:18.679734 kernel: audit: type=1327 audit(1707469038.658:442): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:18.658000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:18.682843 
systemd-logind[1182]: New session 18 of user core. Feb 9 08:57:18.684424 systemd[1]: Started session-18.scope. Feb 9 08:57:18.697589 kernel: audit: type=1105 audit(1707469038.689:443): pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.689000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.696000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.703593 kernel: audit: type=1103 audit(1707469038.696:444): pid=5160 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.870746 sshd[5157]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:18.873666 systemd[1]: Started sshd@18-143.198.159.117:22-139.178.89.65:36100.service. Feb 9 08:57:18.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-143.198.159.117:22-139.178.89.65:36100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:18.878611 kernel: audit: type=1130 audit(1707469038.872:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-143.198.159.117:22-139.178.89.65:36100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:18.878000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.889434 kernel: audit: type=1106 audit(1707469038.878:446): pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.886000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.890553 systemd[1]: sshd@17-143.198.159.117:22-139.178.89.65:36088.service: Deactivated successfully. Feb 9 08:57:18.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.159.117:22-139.178.89.65:36088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:57:18.891694 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 08:57:18.894240 systemd-logind[1182]: Session 18 logged out. Waiting for processes to exit. Feb 9 08:57:18.896455 systemd-logind[1182]: Removed session 18. Feb 9 08:57:18.962000 audit[5168]: USER_ACCT pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.963859 sshd[5168]: Accepted publickey for core from 139.178.89.65 port 36100 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:18.965000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.965000 audit[5168]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9a46ffb0 a2=3 a3=0 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:18.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:18.968063 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:18.977953 systemd[1]: Started session-19.scope. Feb 9 08:57:18.978399 systemd-logind[1182]: New session 19 of user core. Feb 9 08:57:18.988000 audit[5168]: USER_START pid=5168 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:18.990000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.435435 sshd[5168]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:19.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-143.198.159.117:22-139.178.89.65:36104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:19.438314 systemd[1]: Started sshd@19-143.198.159.117:22-139.178.89.65:36104.service. Feb 9 08:57:19.441000 audit[5168]: USER_END pid=5168 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.441000 audit[5168]: CRED_DISP pid=5168 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.447349 systemd-logind[1182]: Session 19 logged out. Waiting for processes to exit. Feb 9 08:57:19.449357 systemd[1]: sshd@18-143.198.159.117:22-139.178.89.65:36100.service: Deactivated successfully. 
Feb 9 08:57:19.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-143.198.159.117:22-139.178.89.65:36100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:19.450811 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 08:57:19.454073 systemd-logind[1182]: Removed session 19. Feb 9 08:57:19.519157 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 36104 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:19.517000 audit[5179]: USER_ACCT pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.519000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.519000 audit[5179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdac227020 a2=3 a3=0 items=0 ppid=1 pid=5179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:19.519000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:19.522381 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:19.527671 systemd-logind[1182]: New session 20 of user core. Feb 9 08:57:19.528517 systemd[1]: Started session-20.scope. Feb 9 08:57:19.541000 audit[5179]: USER_START pid=5179 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:19.542000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.135318 systemd[1]: Started sshd@20-143.198.159.117:22-139.178.89.65:36106.service. Feb 9 08:57:22.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-143.198.159.117:22-139.178.89.65:36106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:57:22.137658 sshd[5179]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:22.142000 audit[5179]: USER_END pid=5179 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.143000 audit[5179]: CRED_DISP pid=5179 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.149825 systemd[1]: sshd@19-143.198.159.117:22-139.178.89.65:36104.service: Deactivated successfully. Feb 9 08:57:22.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-143.198.159.117:22-139.178.89.65:36104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:22.151038 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 08:57:22.154774 systemd-logind[1182]: Session 20 logged out. Waiting for processes to exit. Feb 9 08:57:22.158667 systemd-logind[1182]: Removed session 20. Feb 9 08:57:22.223000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.225836 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 36106 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:22.225000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.225000 audit[5200]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc62cd6260 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.225000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:22.228490 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:22.236501 systemd-logind[1182]: New session 21 of user core. Feb 9 08:57:22.237483 systemd[1]: Started session-21.scope. 
Feb 9 08:57:22.243000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.245000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.286000 audit[5228]: NETFILTER_CFG table=filter:141 family=2 entries=18 op=nft_register_rule pid=5228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:22.286000 audit[5228]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff2cf55c50 a2=0 a3=7fff2cf55c3c items=0 ppid=2381 pid=5228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:22.290000 audit[5228]: NETFILTER_CFG table=nat:142 family=2 entries=94 op=nft_register_rule pid=5228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:22.290000 audit[5228]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff2cf55c50 a2=0 a3=7fff2cf55c3c items=0 ppid=2381 pid=5228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:22.391000 audit[5258]: NETFILTER_CFG table=filter:143 family=2 entries=30 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:22.391000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe1910cc80 a2=0 a3=7ffe1910cc6c items=0 ppid=2381 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:22.394000 audit[5258]: NETFILTER_CFG table=nat:144 family=2 entries=94 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:22.394000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe1910cc80 a2=0 a3=7ffe1910cc6c items=0 ppid=2381 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:22.883779 sshd[5200]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:22.885590 systemd[1]: Started sshd@21-143.198.159.117:22-139.178.89.65:36122.service. 
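The PROCTITLE on the NETFILTER_CFG records above decodes, with the same hex/NUL convention as earlier, to iptables-restore -w 5 -W 100000 --noflush --counters: a rule reload through xtables-nft-multi that waits up to 5 s for the xtables lock (polling every 100000 µs), keeps existing rules (--noflush), and preserves packet/byte counters. A minimal Go sketch of the argv decode, with the hex string copied from the records above:

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    func main() {
    	// PROCTITLE value from the NETFILTER_CFG audit records above.
    	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    	raw, err := hex.DecodeString(p)
    	if err != nil {
    		panic(err)
    	}
    	// Argv elements are NUL-separated in the proctitle encoding.
    	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
    	// prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }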
Feb 9 08:57:22.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-143.198.159.117:22-139.178.89.65:36122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:22.891000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.891000 audit[5200]: CRED_DISP pid=5200 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.895932 systemd[1]: sshd@20-143.198.159.117:22-139.178.89.65:36106.service: Deactivated successfully. Feb 9 08:57:22.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-143.198.159.117:22-139.178.89.65:36106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:22.898254 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 08:57:22.898790 systemd-logind[1182]: Session 21 logged out. Waiting for processes to exit. Feb 9 08:57:22.902238 systemd-logind[1182]: Removed session 21. Feb 9 08:57:22.961000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.963425 sshd[5259]: Accepted publickey for core from 139.178.89.65 port 36122 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:22.963000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.963000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0fa85620 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:22.963000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:22.965716 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:22.972166 systemd[1]: Started session-22.scope. Feb 9 08:57:22.972681 systemd-logind[1182]: New session 22 of user core. 
Feb 9 08:57:22.978000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:22.980000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:23.141000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:23.141000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:23.141328 sshd[5259]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:23.146224 systemd-logind[1182]: Session 22 logged out. Waiting for processes to exit. Feb 9 08:57:23.146470 systemd[1]: sshd@21-143.198.159.117:22-139.178.89.65:36122.service: Deactivated successfully. Feb 9 08:57:23.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-143.198.159.117:22-139.178.89.65:36122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:23.147643 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 08:57:23.150227 systemd-logind[1182]: Removed session 22. Feb 9 08:57:24.513261 systemd[1]: run-containerd-runc-k8s.io-f952ef9328aa94ff4fb3ff395812f615aaa95ad833e3d6c2ad9db04264f273b5-runc.FRhqBP.mount: Deactivated successfully. Feb 9 08:57:26.432535 systemd[1]: run-containerd-runc-k8s.io-77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d-runc.BGOvVK.mount: Deactivated successfully. Feb 9 08:57:28.146654 systemd[1]: Started sshd@22-143.198.159.117:22-139.178.89.65:52478.service. Feb 9 08:57:28.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.159.117:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:28.152719 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 08:57:28.152856 kernel: audit: type=1130 audit(1707469048.145:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.159.117:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:57:28.195000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.198815 sshd[5316]: Accepted publickey for core from 139.178.89.65 port 52478 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:28.201622 kernel: audit: type=1101 audit(1707469048.195:489): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.201000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.205848 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:28.209352 kernel: audit: type=1103 audit(1707469048.201:490): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.209511 kernel: audit: type=1006 audit(1707469048.203:491): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 08:57:28.209713 kernel: audit: type=1300 audit(1707469048.203:491): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc786f0560 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:28.203000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc786f0560 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:28.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:28.215661 kernel: audit: type=1327 audit(1707469048.203:491): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:28.220284 systemd-logind[1182]: New session 23 of user core. Feb 9 08:57:28.220372 systemd[1]: Started session-23.scope. 
Feb 9 08:57:28.225000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.227000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.237727 kernel: audit: type=1105 audit(1707469048.225:492): pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.237861 kernel: audit: type=1103 audit(1707469048.227:493): pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.362319 sshd[5316]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:28.362000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.367031 systemd[1]: sshd@22-143.198.159.117:22-139.178.89.65:52478.service: Deactivated successfully. Feb 9 08:57:28.367963 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 08:57:28.370664 kernel: audit: type=1106 audit(1707469048.362:494): pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.370772 kernel: audit: type=1104 audit(1707469048.362:495): pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.362000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:28.374868 systemd-logind[1182]: Session 23 logged out. Waiting for processes to exit. Feb 9 08:57:28.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.159.117:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:28.376388 systemd-logind[1182]: Removed session 23. Feb 9 08:57:29.492548 systemd[1]: run-containerd-runc-k8s.io-7cc8b568abdc05df14988941475b0d0dbcef115dcc511e7926a59dabbd3d0e10-runc.10xVf1.mount: Deactivated successfully. 
Feb 9 08:57:29.492867 systemd[1]: run-containerd-runc-k8s.io-c61f6b15de50b8b6816998419081bfa7ccfda5db9bfce302119312c780d60ab6-runc.f35JV6.mount: Deactivated successfully. Feb 9 08:57:30.482000 audit[5393]: NETFILTER_CFG table=filter:145 family=2 entries=18 op=nft_register_rule pid=5393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:30.482000 audit[5393]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc045fe3a0 a2=0 a3=7ffc045fe38c items=0 ppid=2381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:30.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:30.489000 audit[5393]: NETFILTER_CFG table=nat:146 family=2 entries=178 op=nft_register_chain pid=5393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 08:57:30.489000 audit[5393]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffc045fe3a0 a2=0 a3=7ffc045fe38c items=0 ppid=2381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:30.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 08:57:30.838023 kubelet[2165]: E0209 08:57:30.837077 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:57:31.652136 systemd[1]: run-containerd-runc-k8s.io-77763656c442e6e0bce1b5105f42348a6f1b8a866a8ed1231308bb7e184c880d-runc.3ilbzG.mount: Deactivated successfully. Feb 9 08:57:33.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.159.117:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:33.367681 systemd[1]: Started sshd@23-143.198.159.117:22-139.178.89.65:52494.service. Feb 9 08:57:33.369112 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 08:57:33.369173 kernel: audit: type=1130 audit(1707469053.366:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.159.117:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 08:57:33.437000 audit[5420]: USER_ACCT pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.439255 sshd[5420]: Accepted publickey for core from 139.178.89.65 port 52494 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:33.441000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.447541 kernel: audit: type=1101 audit(1707469053.437:500): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.447787 kernel: audit: type=1103 audit(1707469053.441:501): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.447825 kernel: audit: type=1006 audit(1707469053.441:502): pid=5420 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 08:57:33.448131 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:33.441000 audit[5420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb52ce410 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:33.455807 kernel: audit: type=1300 audit(1707469053.441:502): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb52ce410 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:33.455945 kernel: audit: type=1327 audit(1707469053.441:502): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:33.441000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:33.455377 systemd[1]: Started session-24.scope. Feb 9 08:57:33.457187 systemd-logind[1182]: New session 24 of user core. 
Feb 9 08:57:33.462000 audit[5420]: USER_START pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.468673 kernel: audit: type=1105 audit(1707469053.462:503): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.468819 kernel: audit: type=1103 audit(1707469053.464:504): pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.464000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.615827 sshd[5420]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:33.616000 audit[5420]: USER_END pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.623624 kernel: audit: type=1106 audit(1707469053.616:505): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.622000 audit[5420]: CRED_DISP pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.631005 kernel: audit: type=1104 audit(1707469053.622:506): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:33.628580 systemd[1]: sshd@23-143.198.159.117:22-139.178.89.65:52494.service: Deactivated successfully. Feb 9 08:57:33.629711 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 08:57:33.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.159.117:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:33.632510 systemd-logind[1182]: Session 24 logged out. Waiting for processes to exit. Feb 9 08:57:33.634449 systemd-logind[1182]: Removed session 24. Feb 9 08:57:38.617979 systemd[1]: Started sshd@24-143.198.159.117:22-139.178.89.65:56662.service. 
Feb 9 08:57:38.624807 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:38.626108 kernel: audit: type=1130 audit(1707469058.616:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-143.198.159.117:22-139.178.89.65:56662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:38.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-143.198.159.117:22-139.178.89.65:56662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:38.675000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.678957 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 56662 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:38.681615 kernel: audit: type=1101 audit(1707469058.675:509): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.681972 kernel: audit: type=1103 audit(1707469058.680:510): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.680000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.682733 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:38.691988 kernel: audit: type=1006 audit(1707469058.680:511): pid=5433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 08:57:38.692105 kernel: audit: type=1300 audit(1707469058.680:511): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedd593b30 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:38.680000 audit[5433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedd593b30 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:38.693954 systemd[1]: Started session-25.scope. Feb 9 08:57:38.695207 systemd-logind[1182]: New session 25 of user core. 
Feb 9 08:57:38.680000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:38.697624 kernel: audit: type=1327 audit(1707469058.680:511): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:38.701000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.708909 kernel: audit: type=1105 audit(1707469058.701:512): pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.707000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.713672 kernel: audit: type=1103 audit(1707469058.707:513): pid=5436 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.842503 sshd[5433]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:38.842000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.843000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.852611 kernel: audit: type=1106 audit(1707469058.842:514): pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.852782 kernel: audit: type=1104 audit(1707469058.843:515): pid=5433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:38.853033 systemd[1]: sshd@24-143.198.159.117:22-139.178.89.65:56662.service: Deactivated successfully. Feb 9 08:57:38.854220 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 08:57:38.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-143.198.159.117:22-139.178.89.65:56662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:38.855631 systemd-logind[1182]: Session 25 logged out. Waiting for processes to exit. Feb 9 08:57:38.857214 systemd-logind[1182]: Removed session 25. 
Feb 9 08:57:43.847865 systemd[1]: Started sshd@25-143.198.159.117:22-139.178.89.65:56678.service. Feb 9 08:57:43.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-143.198.159.117:22-139.178.89.65:56678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:43.849074 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:43.849143 kernel: audit: type=1130 audit(1707469063.846:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-143.198.159.117:22-139.178.89.65:56678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:43.907000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.909762 sshd[5448]: Accepted publickey for core from 139.178.89.65 port 56678 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:43.922720 kernel: audit: type=1101 audit(1707469063.907:518): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.922000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.928649 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:43.929809 kernel: audit: type=1103 audit(1707469063.922:519): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.929852 kernel: audit: type=1006 audit(1707469063.922:520): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 9 08:57:43.922000 audit[5448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf002dba0 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:43.938683 kernel: audit: type=1300 audit(1707469063.922:520): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf002dba0 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:43.938777 kernel: audit: type=1327 audit(1707469063.922:520): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:43.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:43.942016 systemd[1]: Started session-26.scope. Feb 9 08:57:43.942787 systemd-logind[1182]: New session 26 of user core. 
Feb 9 08:57:43.949000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.951000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.960023 kernel: audit: type=1105 audit(1707469063.949:521): pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:43.960149 kernel: audit: type=1103 audit(1707469063.951:522): pid=5451 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:44.082468 sshd[5448]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:44.081000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:44.081000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:44.089522 systemd[1]: sshd@25-143.198.159.117:22-139.178.89.65:56678.service: Deactivated successfully. Feb 9 08:57:44.090404 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 08:57:44.092358 kernel: audit: type=1106 audit(1707469064.081:523): pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:44.092446 kernel: audit: type=1104 audit(1707469064.081:524): pid=5448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:44.093051 systemd-logind[1182]: Session 26 logged out. Waiting for processes to exit. Feb 9 08:57:44.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-143.198.159.117:22-139.178.89.65:56678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:44.094432 systemd-logind[1182]: Removed session 26. 
Feb 9 08:57:44.839216 kubelet[2165]: E0209 08:57:44.839181 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:57:48.838001 kubelet[2165]: E0209 08:57:48.837890 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:57:48.839587 kubelet[2165]: E0209 08:57:48.839281 2165 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 9 08:57:49.087658 systemd[1]: Started sshd@26-143.198.159.117:22-139.178.89.65:38084.service. Feb 9 08:57:49.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-143.198.159.117:22-139.178.89.65:38084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:49.094392 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:49.094464 kernel: audit: type=1130 audit(1707469069.087:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-143.198.159.117:22-139.178.89.65:38084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:49.140000 audit[5473]: USER_ACCT pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.146837 kernel: audit: type=1101 audit(1707469069.140:527): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.146946 sshd[5473]: Accepted publickey for core from 139.178.89.65 port 38084 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:49.146000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.150277 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:49.155045 kernel: audit: type=1103 audit(1707469069.146:528): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.155142 kernel: audit: type=1006 audit(1707469069.146:529): pid=5473 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 9 08:57:49.155176 kernel: audit: type=1300 audit(1707469069.146:529): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5aa90020 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:49.146000 audit[5473]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5aa90020 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:49.156575 systemd[1]: Started session-27.scope. Feb 9 08:57:49.157583 systemd-logind[1182]: New session 27 of user core. Feb 9 08:57:49.146000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:49.162951 kernel: audit: type=1327 audit(1707469069.146:529): proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:49.161000 audit[5473]: USER_START pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.169743 kernel: audit: type=1105 audit(1707469069.161:530): pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.162000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.174863 kernel: audit: type=1103 audit(1707469069.162:531): pid=5477 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.305420 sshd[5473]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:49.305000 audit[5473]: USER_END pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.306000 audit[5473]: CRED_DISP pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.316614 kernel: audit: type=1106 audit(1707469069.305:532): pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.316760 kernel: audit: type=1104 audit(1707469069.306:533): pid=5473 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:49.317007 systemd[1]: sshd@26-143.198.159.117:22-139.178.89.65:38084.service: Deactivated successfully. Feb 9 08:57:49.318759 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 08:57:49.319354 systemd-logind[1182]: Session 27 logged out. Waiting for processes to exit. 
Feb 9 08:57:49.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-143.198.159.117:22-139.178.89.65:38084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:49.321331 systemd-logind[1182]: Removed session 27. Feb 9 08:57:54.309768 systemd[1]: Started sshd@27-143.198.159.117:22-139.178.89.65:38086.service. Feb 9 08:57:54.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-143.198.159.117:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:54.312152 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 08:57:54.312249 kernel: audit: type=1130 audit(1707469074.308:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-143.198.159.117:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:54.374000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.376002 sshd[5490]: Accepted publickey for core from 139.178.89.65 port 38086 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00 Feb 9 08:57:54.380640 kernel: audit: type=1101 audit(1707469074.374:536): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.380797 kernel: audit: type=1103 audit(1707469074.379:537): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.379000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.382207 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:57:54.388038 kernel: audit: type=1006 audit(1707469074.380:538): pid=5490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Feb 9 08:57:54.388705 kernel: audit: type=1300 audit(1707469074.380:538): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff512bd260 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:54.380000 audit[5490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff512bd260 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 08:57:54.380000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:54.394425 kernel: audit: type=1327 audit(1707469074.380:538): 
proctitle=737368643A20636F7265205B707269765D Feb 9 08:57:54.399860 systemd-logind[1182]: New session 28 of user core. Feb 9 08:57:54.400629 systemd[1]: Started session-28.scope. Feb 9 08:57:54.406000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.414675 kernel: audit: type=1105 audit(1707469074.406:539): pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.414761 kernel: audit: type=1103 audit(1707469074.412:540): pid=5493 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.412000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.588372 sshd[5490]: pam_unix(sshd:session): session closed for user core Feb 9 08:57:54.588000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.595722 kernel: audit: type=1106 audit(1707469074.588:541): pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.594000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.598366 systemd[1]: sshd@27-143.198.159.117:22-139.178.89.65:38086.service: Deactivated successfully. Feb 9 08:57:54.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-143.198.159.117:22-139.178.89.65:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 08:57:54.600761 kernel: audit: type=1104 audit(1707469074.594:542): pid=5490 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 08:57:54.601021 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 08:57:54.602416 systemd-logind[1182]: Session 28 logged out. Waiting for processes to exit. Feb 9 08:57:54.604137 systemd-logind[1182]: Removed session 28.