Feb 8 23:21:32.799881 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:21:32.799900 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:21:32.799908 kernel: BIOS-provided physical RAM map:
Feb 8 23:21:32.799913 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 8 23:21:32.799919 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 8 23:21:32.799924 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 8 23:21:32.799931 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 8 23:21:32.799936 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 8 23:21:32.799943 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 8 23:21:32.799949 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 8 23:21:32.799954 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 8 23:21:32.799959 kernel: NX (Execute Disable) protection: active
Feb 8 23:21:32.799965 kernel: SMBIOS 2.8 present.
Feb 8 23:21:32.799970 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 8 23:21:32.799978 kernel: Hypervisor detected: KVM
Feb 8 23:21:32.799984 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 8 23:21:32.799990 kernel: kvm-clock: cpu 0, msr 99faa001, primary cpu clock
Feb 8 23:21:32.799996 kernel: kvm-clock: using sched offset of 2125650184 cycles
Feb 8 23:21:32.800002 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 8 23:21:32.800008 kernel: tsc: Detected 2794.750 MHz processor
Feb 8 23:21:32.800014 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:21:32.800021 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:21:32.800027 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 8 23:21:32.800034 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:21:32.800040 kernel: Using GB pages for direct mapping
Feb 8 23:21:32.800046 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:21:32.800052 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 8 23:21:32.800058 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800064 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800070 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800076 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 8 23:21:32.800081 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800089 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800095 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:21:32.800101 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 8 23:21:32.800107 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 8 23:21:32.800112 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 8 23:21:32.800118 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 8 23:21:32.800124 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 8 23:21:32.800130 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 8 23:21:32.800140 kernel: No NUMA configuration found
Feb 8 23:21:32.800146 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 8 23:21:32.800153 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 8 23:21:32.800159 kernel: Zone ranges:
Feb 8 23:21:32.800166 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:21:32.800172 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 8 23:21:32.800180 kernel: Normal empty
Feb 8 23:21:32.800186 kernel: Movable zone start for each node
Feb 8 23:21:32.800200 kernel: Early memory node ranges
Feb 8 23:21:32.800206 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 8 23:21:32.800213 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 8 23:21:32.800219 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 8 23:21:32.800225 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:21:32.800232 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 8 23:21:32.800238 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 8 23:21:32.800245 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 8 23:21:32.800252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 8 23:21:32.800258 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:21:32.800265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 8 23:21:32.800272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 8 23:21:32.800278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:21:32.800284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 8 23:21:32.800291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 8 23:21:32.800297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:21:32.800305 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 8 23:21:32.800312 kernel: TSC deadline timer available
Feb 8 23:21:32.800318 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 8 23:21:32.800324 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 8 23:21:32.800331 kernel: kvm-guest: setup PV sched yield
Feb 8 23:21:32.800337 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 8 23:21:32.800343 kernel: Booting paravirtualized kernel on KVM
Feb 8 23:21:32.800350 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:21:32.800357 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 8 23:21:32.800363 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 8 23:21:32.800370 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 8 23:21:32.800376 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 8 23:21:32.800383 kernel: kvm-guest: setup async PF for cpu 0
Feb 8 23:21:32.800389 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 8 23:21:32.800395 kernel: kvm-guest: PV spinlocks enabled
Feb 8 23:21:32.800402 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:21:32.800408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 8 23:21:32.800414 kernel: Policy zone: DMA32
Feb 8 23:21:32.800422 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:21:32.800430 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:21:32.800437 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:21:32.800443 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 8 23:21:32.800450 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:21:32.800457 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 8 23:21:32.800463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 8 23:21:32.800470 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:21:32.800476 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:21:32.800484 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:21:32.800491 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:21:32.800497 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 8 23:21:32.800504 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:21:32.800510 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:21:32.800516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:21:32.800523 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 8 23:21:32.800529 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 8 23:21:32.800536 kernel: random: crng init done
Feb 8 23:21:32.800543 kernel: Console: colour VGA+ 80x25
Feb 8 23:21:32.800550 kernel: printk: console [ttyS0] enabled
Feb 8 23:21:32.800556 kernel: ACPI: Core revision 20210730
Feb 8 23:21:32.800563 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 8 23:21:32.800569 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:21:32.800575 kernel: x2apic enabled
Feb 8 23:21:32.800582 kernel: Switched APIC routing to physical x2apic.
Feb 8 23:21:32.800588 kernel: kvm-guest: setup PV IPIs
Feb 8 23:21:32.800594 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 8 23:21:32.800602 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 8 23:21:32.800609 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 8 23:21:32.800615 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 8 23:21:32.800621 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 8 23:21:32.800628 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 8 23:21:32.800635 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:21:32.800641 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:21:32.800647 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:21:32.800654 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:21:32.800667 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 8 23:21:32.800674 kernel: RETBleed: Mitigation: untrained return thunk
Feb 8 23:21:32.800680 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 8 23:21:32.800688 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 8 23:21:32.800695 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:21:32.800702 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:21:32.800709 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:21:32.800715 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:21:32.800722 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 8 23:21:32.800730 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:21:32.800737 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:21:32.800744 kernel: LSM: Security Framework initializing
Feb 8 23:21:32.800751 kernel: SELinux: Initializing.
Feb 8 23:21:32.800757 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:21:32.800783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:21:32.800790 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 8 23:21:32.800798 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 8 23:21:32.800805 kernel: ... version: 0
Feb 8 23:21:32.800811 kernel: ... bit width: 48
Feb 8 23:21:32.800818 kernel: ... generic registers: 6
Feb 8 23:21:32.800825 kernel: ... value mask: 0000ffffffffffff
Feb 8 23:21:32.800831 kernel: ... max period: 00007fffffffffff
Feb 8 23:21:32.800838 kernel: ... fixed-purpose events: 0
Feb 8 23:21:32.800845 kernel: ... event mask: 000000000000003f
Feb 8 23:21:32.800851 kernel: signal: max sigframe size: 1776
Feb 8 23:21:32.800858 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:21:32.800866 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:21:32.800873 kernel: x86: Booting SMP configuration:
Feb 8 23:21:32.800879 kernel: .... node #0, CPUs: #1
Feb 8 23:21:32.800886 kernel: kvm-clock: cpu 1, msr 99faa041, secondary cpu clock
Feb 8 23:21:32.800893 kernel: kvm-guest: setup async PF for cpu 1
Feb 8 23:21:32.800899 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 8 23:21:32.800906 kernel: #2
Feb 8 23:21:32.800913 kernel: kvm-clock: cpu 2, msr 99faa081, secondary cpu clock
Feb 8 23:21:32.800919 kernel: kvm-guest: setup async PF for cpu 2
Feb 8 23:21:32.800928 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 8 23:21:32.800934 kernel: #3
Feb 8 23:21:32.800941 kernel: kvm-clock: cpu 3, msr 99faa0c1, secondary cpu clock
Feb 8 23:21:32.800947 kernel: kvm-guest: setup async PF for cpu 3
Feb 8 23:21:32.800954 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 8 23:21:32.800961 kernel: smp: Brought up 1 node, 4 CPUs
Feb 8 23:21:32.800967 kernel: smpboot: Max logical packages: 1
Feb 8 23:21:32.800974 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 8 23:21:32.800981 kernel: devtmpfs: initialized
Feb 8 23:21:32.800988 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:21:32.800995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:21:32.801002 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 8 23:21:32.801009 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:21:32.801015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:21:32.801022 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:21:32.801029 kernel: audit: type=2000 audit(1707434492.260:1): state=initialized audit_enabled=0 res=1
Feb 8 23:21:32.801035 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:21:32.801042 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:21:32.801050 kernel: cpuidle: using governor menu
Feb 8 23:21:32.801057 kernel: ACPI: bus type PCI registered
Feb 8 23:21:32.801064 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:21:32.801070 kernel: dca service started, version 1.12.1
Feb 8 23:21:32.801077 kernel: PCI: Using configuration type 1 for base access
Feb 8 23:21:32.801084 kernel: PCI: Using configuration type 1 for extended access
Feb 8 23:21:32.801090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:21:32.801097 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:21:32.801104 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:21:32.801112 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:21:32.801118 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:21:32.801125 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:21:32.801132 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:21:32.801138 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:21:32.801145 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:21:32.801152 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:21:32.801158 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:21:32.801165 kernel: ACPI: Interpreter enabled
Feb 8 23:21:32.801173 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 8 23:21:32.801180 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:21:32.801186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:21:32.801199 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 8 23:21:32.801206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 8 23:21:32.801312 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 8 23:21:32.801323 kernel: acpiphp: Slot [3] registered
Feb 8 23:21:32.801330 kernel: acpiphp: Slot [4] registered
Feb 8 23:21:32.801339 kernel: acpiphp: Slot [5] registered
Feb 8 23:21:32.801345 kernel: acpiphp: Slot [6] registered
Feb 8 23:21:32.801352 kernel: acpiphp: Slot [7] registered
Feb 8 23:21:32.801359 kernel: acpiphp: Slot [8] registered
Feb 8 23:21:32.801365 kernel: acpiphp: Slot [9] registered
Feb 8 23:21:32.801372 kernel: acpiphp: Slot [10] registered
Feb 8 23:21:32.801379 kernel: acpiphp: Slot [11] registered
Feb 8 23:21:32.801385 kernel: acpiphp: Slot [12] registered
Feb 8 23:21:32.801392 kernel: acpiphp: Slot [13] registered
Feb 8 23:21:32.801399 kernel: acpiphp: Slot [14] registered
Feb 8 23:21:32.801407 kernel: acpiphp: Slot [15] registered
Feb 8 23:21:32.801414 kernel: acpiphp: Slot [16] registered
Feb 8 23:21:32.801421 kernel: acpiphp: Slot [17] registered
Feb 8 23:21:32.801427 kernel: acpiphp: Slot [18] registered
Feb 8 23:21:32.801434 kernel: acpiphp: Slot [19] registered
Feb 8 23:21:32.801441 kernel: acpiphp: Slot [20] registered
Feb 8 23:21:32.801448 kernel: acpiphp: Slot [21] registered
Feb 8 23:21:32.801454 kernel: acpiphp: Slot [22] registered
Feb 8 23:21:32.801461 kernel: acpiphp: Slot [23] registered
Feb 8 23:21:32.801469 kernel: acpiphp: Slot [24] registered
Feb 8 23:21:32.801476 kernel: acpiphp: Slot [25] registered
Feb 8 23:21:32.801482 kernel: acpiphp: Slot [26] registered
Feb 8 23:21:32.801489 kernel: acpiphp: Slot [27] registered
Feb 8 23:21:32.801495 kernel: acpiphp: Slot [28] registered
Feb 8 23:21:32.801502 kernel: acpiphp: Slot [29] registered
Feb 8 23:21:32.801509 kernel: acpiphp: Slot [30] registered
Feb 8 23:21:32.801515 kernel: acpiphp: Slot [31] registered
Feb 8 23:21:32.801522 kernel: PCI host bridge to bus 0000:00
Feb 8 23:21:32.801597 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 8 23:21:32.801661 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 8 23:21:32.801720 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 8 23:21:32.801803 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 8 23:21:32.801863 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 8 23:21:32.801925 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 8 23:21:32.802012 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 8 23:21:32.802777 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 8 23:21:32.802862 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 8 23:21:32.802931 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 8 23:21:32.802999 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 8 23:21:32.803107 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 8 23:21:32.803176 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 8 23:21:32.803257 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 8 23:21:32.803337 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 8 23:21:32.803405 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 8 23:21:32.803471 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 8 23:21:32.803545 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 8 23:21:32.807157 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 8 23:21:32.807301 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 8 23:21:32.807380 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 8 23:21:32.807476 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 8 23:21:32.807563 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 8 23:21:32.807636 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 8 23:21:32.807707 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 8 23:21:32.807795 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 8 23:21:32.807885 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 8 23:21:32.807957 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 8 23:21:32.808039 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 8 23:21:32.808119 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 8 23:21:32.808231 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 8 23:21:32.808324 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 8 23:21:32.808404 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 8 23:21:32.808481 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 8 23:21:32.808557 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 8 23:21:32.808566 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 8 23:21:32.808581 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 8 23:21:32.808593 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 8 23:21:32.808600 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 8 23:21:32.808607 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 8 23:21:32.808614 kernel: iommu: Default domain type: Translated
Feb 8 23:21:32.808621 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:21:32.808701 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 8 23:21:32.808785 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 8 23:21:32.808867 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 8 23:21:32.808877 kernel: vgaarb: loaded
Feb 8 23:21:32.808895 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:21:32.808903 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:21:32.808909 kernel: PTP clock support registered
Feb 8 23:21:32.808916 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:21:32.808923 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 8 23:21:32.808933 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 8 23:21:32.808940 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 8 23:21:32.808947 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 8 23:21:32.808954 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 8 23:21:32.808960 kernel: clocksource: Switched to clocksource kvm-clock
Feb 8 23:21:32.808985 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:21:32.808992 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:21:32.808999 kernel: pnp: PnP ACPI init
Feb 8 23:21:32.809093 kernel: pnp 00:02: [dma 2]
Feb 8 23:21:32.809110 kernel: pnp: PnP ACPI: found 6 devices
Feb 8 23:21:32.809117 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:21:32.809124 kernel: NET: Registered PF_INET protocol family
Feb 8 23:21:32.809131 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:21:32.809138 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 8 23:21:32.809155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:21:32.809163 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 8 23:21:32.809170 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 8 23:21:32.809178 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 8 23:21:32.809185 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:21:32.809198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:21:32.809215 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:21:32.809222 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:21:32.809309 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 8 23:21:32.809392 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 8 23:21:32.809489 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 8 23:21:32.809573 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 8 23:21:32.809650 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 8 23:21:32.809743 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 8 23:21:32.809851 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 8 23:21:32.809920 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 8 23:21:32.809928 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:21:32.809935 kernel: Initialise system trusted keyrings
Feb 8 23:21:32.809942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 8 23:21:32.809951 kernel: Key type asymmetric registered
Feb 8 23:21:32.809961 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:21:32.809969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:21:32.809977 kernel: io scheduler mq-deadline registered
Feb 8 23:21:32.809984 kernel: io scheduler kyber registered
Feb 8 23:21:32.809991 kernel: io scheduler bfq registered
Feb 8 23:21:32.809998 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:21:32.810005 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 8 23:21:32.810012 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 8 23:21:32.810018 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 8 23:21:32.810026 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:21:32.810033 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:21:32.810040 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 8 23:21:32.810047 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 8 23:21:32.810054 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 8 23:21:32.810123 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 8 23:21:32.810133 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 8 23:21:32.810201 kernel: rtc_cmos 00:05: registered as rtc0
Feb 8 23:21:32.810267 kernel: rtc_cmos 00:05: setting system clock to 2024-02-08T23:21:32 UTC (1707434492)
Feb 8 23:21:32.810328 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 8 23:21:32.810337 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:21:32.810345 kernel: Segment Routing with IPv6
Feb 8 23:21:32.810351 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:21:32.810358 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:21:32.810365 kernel: Key type dns_resolver registered
Feb 8 23:21:32.810372 kernel: IPI shorthand broadcast: enabled
Feb 8 23:21:32.810378 kernel: sched_clock: Marking stable (347087058, 72438191)->(452706604, -33181355)
Feb 8 23:21:32.810387 kernel: registered taskstats version 1
Feb 8 23:21:32.810394 kernel: Loading compiled-in X.509 certificates
Feb 8 23:21:32.810401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:21:32.810408 kernel: Key type .fscrypt registered
Feb 8 23:21:32.810414 kernel: Key type fscrypt-provisioning registered
Feb 8 23:21:32.810421 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:21:32.810428 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:21:32.810434 kernel: ima: No architecture policies found
Feb 8 23:21:32.810442 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:21:32.810449 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:21:32.810456 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:21:32.810463 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:21:32.810470 kernel: Run /init as init process
Feb 8 23:21:32.810477 kernel: with arguments:
Feb 8 23:21:32.810483 kernel: /init
Feb 8 23:21:32.810490 kernel: with environment:
Feb 8 23:21:32.810507 kernel: HOME=/
Feb 8 23:21:32.810515 kernel: TERM=linux
Feb 8 23:21:32.810523 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:21:32.810532 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:21:32.810541 systemd[1]: Detected virtualization kvm.
Feb 8 23:21:32.810549 systemd[1]: Detected architecture x86-64.
Feb 8 23:21:32.810556 systemd[1]: Running in initrd.
Feb 8 23:21:32.810563 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:21:32.810570 systemd[1]: Hostname set to .
Feb 8 23:21:32.810579 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:21:32.810587 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:21:32.810594 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:21:32.810601 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:21:32.810609 systemd[1]: Reached target paths.target.
Feb 8 23:21:32.810616 systemd[1]: Reached target slices.target.
Feb 8 23:21:32.810623 systemd[1]: Reached target swap.target.
Feb 8 23:21:32.810631 systemd[1]: Reached target timers.target.
Feb 8 23:21:32.810640 systemd[1]: Listening on iscsid.socket.
Feb 8 23:21:32.810647 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:21:32.810655 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:21:32.810662 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:21:32.810669 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:21:32.810677 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:21:32.810684 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:21:32.810692 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:21:32.810700 systemd[1]: Reached target sockets.target.
Feb 8 23:21:32.810708 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:21:32.810716 systemd[1]: Finished network-cleanup.service.
Feb 8 23:21:32.810723 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:21:32.810731 systemd[1]: Starting systemd-journald.service...
Feb 8 23:21:32.810738 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:21:32.810747 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:21:32.810755 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:21:32.810779 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:21:32.810787 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:21:32.810794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:21:32.810802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:21:32.810812 systemd-journald[198]: Journal started
Feb 8 23:21:32.810849 systemd-journald[198]: Runtime Journal (/run/log/journal/36d2b7d0319a457e87a0b29f61f151ef) is 6.0M, max 48.5M, 42.5M free.
Feb 8 23:21:32.805435 systemd-modules-load[199]: Inserted module 'overlay'
Feb 8 23:21:32.830018 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:21:32.830043 kernel: Bridge firewalling registered
Feb 8 23:21:32.817502 systemd-resolved[200]: Positive Trust Anchors:
Feb 8 23:21:32.833295 systemd[1]: Started systemd-journald.service.
Feb 8 23:21:32.833310 kernel: audit: type=1130 audit(1707434492.829:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:32.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:32.817516 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:21:32.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:32.817550 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:21:32.850642 kernel: audit: type=1130 audit(1707434492.833:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:32.820104 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 8 23:21:32.852312 kernel: SCSI subsystem initialized Feb 8 23:21:32.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.829981 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 8 23:21:32.854836 kernel: audit: type=1130 audit(1707434492.851:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.834872 systemd[1]: Started systemd-resolved.service. Feb 8 23:21:32.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.852482 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:21:32.856050 systemd[1]: Reached target nss-lookup.target. Feb 8 23:21:32.859439 kernel: audit: type=1130 audit(1707434492.855:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.860014 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:21:32.865802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:21:32.865869 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:21:32.865880 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:21:32.869305 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 8 23:21:32.870227 systemd[1]: Finished systemd-modules-load.service. 
Feb 8 23:21:32.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.871329 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:21:32.897626 kernel: audit: type=1130 audit(1707434492.869:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.901330 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:21:32.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.902542 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:21:32.905515 kernel: audit: type=1130 audit(1707434492.900:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.907080 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:21:32.910114 kernel: audit: type=1130 audit(1707434492.906:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:32.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:32.913877 dracut-cmdline[222]: dracut-dracut-053 Feb 8 23:21:32.916193 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:21:32.969787 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:21:32.979785 kernel: iscsi: registered transport (tcp) Feb 8 23:21:32.998793 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:21:32.998812 kernel: QLogic iSCSI HBA Driver Feb 8 23:21:33.026568 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:21:33.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.027681 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:21:33.030640 kernel: audit: type=1130 audit(1707434493.025:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:33.072784 kernel: raid6: avx2x4 gen() 29923 MB/s Feb 8 23:21:33.089779 kernel: raid6: avx2x4 xor() 7491 MB/s Feb 8 23:21:33.106781 kernel: raid6: avx2x2 gen() 31601 MB/s Feb 8 23:21:33.123777 kernel: raid6: avx2x2 xor() 18846 MB/s Feb 8 23:21:33.140779 kernel: raid6: avx2x1 gen() 26257 MB/s Feb 8 23:21:33.157779 kernel: raid6: avx2x1 xor() 15090 MB/s Feb 8 23:21:33.174775 kernel: raid6: sse2x4 gen() 14564 MB/s Feb 8 23:21:33.191775 kernel: raid6: sse2x4 xor() 7395 MB/s Feb 8 23:21:33.208782 kernel: raid6: sse2x2 gen() 16234 MB/s Feb 8 23:21:33.225779 kernel: raid6: sse2x2 xor() 9626 MB/s Feb 8 23:21:33.242777 kernel: raid6: sse2x1 gen() 12333 MB/s Feb 8 23:21:33.260227 kernel: raid6: sse2x1 xor() 7632 MB/s Feb 8 23:21:33.260248 kernel: raid6: using algorithm avx2x2 gen() 31601 MB/s Feb 8 23:21:33.260260 kernel: raid6: .... xor() 18846 MB/s, rmw enabled Feb 8 23:21:33.260271 kernel: raid6: using avx2x2 recovery algorithm Feb 8 23:21:33.271784 kernel: xor: automatically using best checksumming function avx Feb 8 23:21:33.359788 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:21:33.367618 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:21:33.370828 kernel: audit: type=1130 audit(1707434493.367:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.369000 audit: BPF prog-id=7 op=LOAD Feb 8 23:21:33.370000 audit: BPF prog-id=8 op=LOAD Feb 8 23:21:33.371185 systemd[1]: Starting systemd-udevd.service... Feb 8 23:21:33.382038 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 8 23:21:33.385379 systemd[1]: Started systemd-udevd.service. 
Feb 8 23:21:33.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.388919 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:21:33.398471 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Feb 8 23:21:33.421844 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:21:33.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.422986 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:21:33.455000 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:21:33.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:33.483791 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:21:33.497020 kernel: AVX2 version of gcm_enc/dec engaged. Feb 8 23:21:33.497047 kernel: AES CTR mode by8 optimization enabled Feb 8 23:21:33.497056 kernel: libata version 3.00 loaded. Feb 8 23:21:33.500794 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:21:33.502784 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 8 23:21:33.503084 kernel: scsi host0: ata_piix Feb 8 23:21:33.504962 kernel: scsi host1: ata_piix Feb 8 23:21:33.505148 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 8 23:21:33.505159 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 8 23:21:33.508675 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 8 23:21:33.508696 kernel: GPT:9289727 != 19775487 Feb 8 23:21:33.508705 kernel: GPT:Alternate GPT header not at the end of the disk. 
Feb 8 23:21:33.508714 kernel: GPT:9289727 != 19775487 Feb 8 23:21:33.508728 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:21:33.508737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:21:33.663796 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 8 23:21:33.663875 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 8 23:21:33.678785 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Feb 8 23:21:33.679490 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:21:33.679968 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:21:33.688025 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:21:33.702506 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:21:33.704926 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 8 23:21:33.705089 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:21:33.707010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:21:33.708305 systemd[1]: Starting disk-uuid.service... Feb 8 23:21:33.732796 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 8 23:21:33.896004 disk-uuid[530]: Primary Header is updated. Feb 8 23:21:33.896004 disk-uuid[530]: Secondary Entries is updated. Feb 8 23:21:33.896004 disk-uuid[530]: Secondary Header is updated. Feb 8 23:21:33.899785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:21:33.902780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:21:34.909795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:21:34.910158 disk-uuid[534]: The operation has completed successfully. Feb 8 23:21:34.931426 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:21:34.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 8 23:21:34.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:34.931505 systemd[1]: Finished disk-uuid.service. Feb 8 23:21:34.935669 systemd[1]: Starting verity-setup.service... Feb 8 23:21:34.947779 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 8 23:21:34.964794 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:21:34.967029 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:21:34.969806 systemd[1]: Finished verity-setup.service. Feb 8 23:21:34.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.026791 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:21:35.027038 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:21:35.027474 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:21:35.028054 systemd[1]: Starting ignition-setup.service... Feb 8 23:21:35.028943 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:21:35.039046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:21:35.039087 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:21:35.039102 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:21:35.046806 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:21:35.090972 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:21:35.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:35.098000 audit: BPF prog-id=9 op=LOAD Feb 8 23:21:35.100040 systemd[1]: Starting systemd-networkd.service... Feb 8 23:21:35.118450 systemd-networkd[703]: lo: Link UP Feb 8 23:21:35.118460 systemd-networkd[703]: lo: Gained carrier Feb 8 23:21:35.118850 systemd-networkd[703]: Enumeration completed Feb 8 23:21:35.119026 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:21:35.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.136865 systemd-networkd[703]: eth0: Link UP Feb 8 23:21:35.136868 systemd-networkd[703]: eth0: Gained carrier Feb 8 23:21:35.136961 systemd[1]: Started systemd-networkd.service. Feb 8 23:21:35.139537 systemd[1]: Reached target network.target. Feb 8 23:21:35.142543 systemd[1]: Starting iscsiuio.service... Feb 8 23:21:35.146707 systemd[1]: Started iscsiuio.service. Feb 8 23:21:35.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.148267 systemd[1]: Starting iscsid.service... Feb 8 23:21:35.167213 iscsid[708]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:21:35.167213 iscsid[708]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 8 23:21:35.167213 iscsid[708]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:21:35.167213 iscsid[708]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 8 23:21:35.167213 iscsid[708]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:21:35.167213 iscsid[708]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:21:35.167213 iscsid[708]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:21:35.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.167243 systemd[1]: Started iscsid.service. Feb 8 23:21:35.168827 systemd-networkd[703]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:21:35.177168 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:21:35.179025 systemd[1]: Finished ignition-setup.service. Feb 8 23:21:35.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.180707 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:21:35.187834 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:21:35.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.189206 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:21:35.190551 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:21:35.191902 systemd[1]: Reached target remote-fs.target. Feb 8 23:21:35.193632 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:21:35.200608 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:21:35.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:35.214698 ignition[711]: Ignition 2.14.0 Feb 8 23:21:35.214709 ignition[711]: Stage: fetch-offline Feb 8 23:21:35.214784 ignition[711]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:21:35.214794 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:21:35.214873 ignition[711]: parsed url from cmdline: "" Feb 8 23:21:35.214876 ignition[711]: no config URL provided Feb 8 23:21:35.214880 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:21:35.214886 ignition[711]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:21:35.214902 ignition[711]: op(1): [started] loading QEMU firmware config module Feb 8 23:21:35.214906 ignition[711]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 8 23:21:35.221931 ignition[711]: op(1): [finished] loading QEMU firmware config module Feb 8 23:21:35.276220 ignition[711]: parsing config with SHA512: 81ab10083d20dd0bb167af26f94fa1e7ce30759bf26ff15be180038ebe38ed19e93d54e7bfed6f4c8bb7ef5793fe390f3a5d963ed136abc7594deca3f638bba5 Feb 8 23:21:35.307969 unknown[711]: fetched base config from "system" Feb 8 23:21:35.308815 unknown[711]: fetched user config from "qemu" Feb 8 23:21:35.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.309390 ignition[711]: fetch-offline: fetch-offline passed Feb 8 23:21:35.312643 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:21:35.309459 ignition[711]: Ignition finished successfully Feb 8 23:21:35.324455 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.76 Feb 8 23:21:35.324467 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 8 23:21:35.324631 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Feb 8 23:21:35.325372 systemd[1]: Starting ignition-kargs.service... Feb 8 23:21:35.333991 ignition[732]: Ignition 2.14.0 Feb 8 23:21:35.334001 ignition[732]: Stage: kargs Feb 8 23:21:35.334089 ignition[732]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:21:35.334098 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:21:35.335198 ignition[732]: kargs: kargs passed Feb 8 23:21:35.336513 systemd[1]: Finished ignition-kargs.service. Feb 8 23:21:35.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.335229 ignition[732]: Ignition finished successfully Feb 8 23:21:35.338064 systemd[1]: Starting ignition-disks.service... Feb 8 23:21:35.344063 ignition[739]: Ignition 2.14.0 Feb 8 23:21:35.344072 ignition[739]: Stage: disks Feb 8 23:21:35.344155 ignition[739]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:21:35.344164 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:21:35.346323 systemd[1]: Finished ignition-disks.service. Feb 8 23:21:35.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.345260 ignition[739]: disks: disks passed Feb 8 23:21:35.347639 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:21:35.345292 ignition[739]: Ignition finished successfully Feb 8 23:21:35.348721 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:21:35.349363 systemd[1]: Reached target local-fs.target. Feb 8 23:21:35.350342 systemd[1]: Reached target sysinit.target. Feb 8 23:21:35.350631 systemd[1]: Reached target basic.target. Feb 8 23:21:35.351584 systemd[1]: Starting systemd-fsck-root.service... 
Feb 8 23:21:35.367808 systemd-fsck[747]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 8 23:21:35.499226 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:21:35.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.511212 systemd[1]: Mounting sysroot.mount... Feb 8 23:21:35.528712 systemd[1]: Mounted sysroot.mount. Feb 8 23:21:35.529086 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:21:35.530772 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:21:35.530974 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:21:35.531541 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:21:35.531577 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:21:35.531597 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:21:35.537531 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:21:35.538512 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:21:35.543363 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:21:35.546946 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:21:35.549825 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:21:35.553241 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:21:35.573630 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:21:35.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.574837 systemd[1]: Starting ignition-mount.service... 
Feb 8 23:21:35.575847 systemd[1]: Starting sysroot-boot.service... Feb 8 23:21:35.580224 bash[798]: umount: /sysroot/usr/share/oem: not mounted. Feb 8 23:21:35.587843 ignition[800]: INFO : Ignition 2.14.0 Feb 8 23:21:35.587843 ignition[800]: INFO : Stage: mount Feb 8 23:21:35.588953 ignition[800]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:21:35.588953 ignition[800]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:21:35.591113 systemd[1]: Finished sysroot-boot.service. Feb 8 23:21:35.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.592370 ignition[800]: INFO : mount: mount passed Feb 8 23:21:35.592940 ignition[800]: INFO : Ignition finished successfully Feb 8 23:21:35.593991 systemd[1]: Finished ignition-mount.service. Feb 8 23:21:35.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:35.974650 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:21:35.981404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808) Feb 8 23:21:35.981432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:21:35.981442 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:21:35.982788 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:21:35.985132 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:21:35.986301 systemd[1]: Starting ignition-files.service... 
Feb 8 23:21:35.998823 ignition[828]: INFO : Ignition 2.14.0 Feb 8 23:21:35.998823 ignition[828]: INFO : Stage: files Feb 8 23:21:36.011214 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:21:36.011214 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:21:36.013776 ignition[828]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:21:36.014945 ignition[828]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:21:36.014945 ignition[828]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:21:36.016969 ignition[828]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:21:36.016969 ignition[828]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:21:36.016969 ignition[828]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:21:36.016710 unknown[828]: wrote ssh authorized keys file for user: core Feb 8 23:21:36.020557 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:21:36.020557 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:21:36.046820 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:21:36.110479 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:21:36.111810 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:21:36.111810 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:21:36.463652 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:21:36.574192 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:21:36.574192 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:21:36.584606 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:21:36.584606 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:21:36.765938 systemd-networkd[703]: eth0: Gained IPv6LL Feb 8 23:21:36.889403 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:21:36.964022 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:21:36.966126 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:21:36.966126 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:21:36.966126 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:21:36.966126 ignition[828]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:21:36.966126 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:21:37.040086 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 8 23:21:37.314422 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 8 23:21:37.314422 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:21:37.319448 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:21:37.319448 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 8 23:21:37.363129 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 8 23:21:37.549819 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 8 23:21:37.551892 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:21:37.551892 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:21:37.551892 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:21:37.594609 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 8 23:21:38.096931 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:21:38.099756 ignition[828]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(18): [started] processing unit "containerd.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(18): [finished] processing unit "containerd.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:21:38.134391 ignition[828]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service"
Feb 8 23:21:38.171247 ignition[828]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 8 23:21:38.177134 ignition[828]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 8 23:21:38.177134 ignition[828]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 8 23:21:38.179127 ignition[828]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:21:38.180280 ignition[828]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:21:38.181411 ignition[828]: INFO : files: files passed
Feb 8 23:21:38.181925 ignition[828]: INFO : Ignition finished successfully
Feb 8 23:21:38.183446 systemd[1]: Finished ignition-files.service.
Feb 8 23:21:38.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.184553 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:21:38.188481 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 8 23:21:38.188500 kernel: audit: type=1130 audit(1707434498.182:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.187589 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:21:38.188122 systemd[1]: Starting ignition-quench.service...
Feb 8 23:21:38.190229 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:21:38.195685 kernel: audit: type=1130 audit(1707434498.189:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.195699 kernel: audit: type=1131 audit(1707434498.189:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.190292 systemd[1]: Finished ignition-quench.service.
Feb 8 23:21:38.196903 initrd-setup-root-after-ignition[854]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 8 23:21:38.199009 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:21:38.199545 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:21:38.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.220291 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:21:38.225144 kernel: audit: type=1130 audit(1707434498.219:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.223412 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:21:38.233747 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:21:38.233846 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:21:38.239369 kernel: audit: type=1130 audit(1707434498.233:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.239384 kernel: audit: type=1131 audit(1707434498.233:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.234305 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:21:38.240437 systemd[1]: Reached target initrd.target.
Feb 8 23:21:38.241472 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:21:38.242941 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:21:38.251979 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:21:38.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.253611 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:21:38.256180 kernel: audit: type=1130 audit(1707434498.253:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.261926 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:21:38.262315 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:21:38.262538 systemd[1]: Stopped target timers.target.
Feb 8 23:21:38.262751 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:21:38.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.262854 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:21:38.263186 systemd[1]: Stopped target initrd.target.
Feb 8 23:21:38.265486 systemd[1]: Stopped target basic.target.
Feb 8 23:21:38.270431 kernel: audit: type=1131 audit(1707434498.261:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.265698 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:21:38.266071 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:21:38.271573 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:21:38.272519 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:21:38.272713 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:21:38.273051 systemd[1]: Stopped target sysinit.target.
Feb 8 23:21:38.273272 systemd[1]: Stopped target local-fs.target.
Feb 8 23:21:38.273487 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:21:38.273704 systemd[1]: Stopped target swap.target.
Feb 8 23:21:38.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.274014 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:21:38.282749 kernel: audit: type=1131 audit(1707434498.277:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.274102 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:21:38.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.279082 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:21:38.286682 kernel: audit: type=1131 audit(1707434498.282:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.281983 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:21:38.282086 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:21:38.283138 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:21:38.283217 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:21:38.285478 systemd[1]: Stopped target paths.target.
Feb 8 23:21:38.287002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:21:38.290809 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:21:38.292106 systemd[1]: Stopped target slices.target.
Feb 8 23:21:38.293145 systemd[1]: Stopped target sockets.target.
Feb 8 23:21:38.294203 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:21:38.294803 systemd[1]: Closed iscsid.socket.
Feb 8 23:21:38.295778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:21:38.296587 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:21:38.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.298020 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:21:38.298736 systemd[1]: Stopped ignition-files.service.
Feb 8 23:21:38.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.304821 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:21:38.306023 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:21:38.307924 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:21:38.309181 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:21:38.310140 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:21:38.311651 ignition[869]: INFO : Ignition 2.14.0
Feb 8 23:21:38.311651 ignition[869]: INFO : Stage: umount
Feb 8 23:21:38.311651 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 8 23:21:38.311651 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:21:38.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.313177 ignition[869]: INFO : umount: umount passed
Feb 8 23:21:38.313177 ignition[869]: INFO : Ignition finished successfully
Feb 8 23:21:38.311753 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:21:38.312503 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:21:38.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.320632 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:21:38.321441 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:21:38.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.323187 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:21:38.324025 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:21:38.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.326455 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:21:38.327656 systemd[1]: Stopped target network.target.
Feb 8 23:21:38.328923 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:21:38.328965 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:21:38.330635 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:21:38.349383 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:21:38.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.350566 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:21:38.350605 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:21:38.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.352301 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:21:38.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.352341 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:21:38.354381 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:21:38.355623 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:21:38.357049 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:21:38.357782 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:21:38.358806 systemd-networkd[703]: eth0: DHCPv6 lease lost
Feb 8 23:21:38.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.359752 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:21:38.360583 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:21:38.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.361951 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:21:38.362761 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:21:38.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.365294 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:21:38.366066 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:21:38.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.367847 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:21:38.367885 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:21:38.369000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:21:38.369729 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:21:38.369780 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:21:38.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.372459 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:21:38.372000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:21:38.373722 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:21:38.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.373786 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:21:38.375249 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:21:38.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.375292 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:21:38.377895 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:21:38.377941 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:21:38.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.379803 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:21:38.382079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:21:38.384614 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:21:38.384712 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:21:38.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.390250 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:21:38.390363 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:21:38.390903 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:21:38.390932 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:21:38.391437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:21:38.391466 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:21:38.391657 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:21:38.391686 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:21:38.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.391937 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:21:38.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.391963 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:21:38.392192 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:21:38.392219 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:21:38.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:21:38.393041 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:21:38.399071 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 8 23:21:38.399139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 8 23:21:38.400341 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:21:38.400374 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:21:38.401005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:21:38.401034 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:21:38.402977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 8 23:21:38.403316 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:21:38.403381 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:21:38.404327 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:21:38.414000 audit: BPF prog-id=5 op=UNLOAD
Feb 8 23:21:38.414000 audit: BPF prog-id=4 op=UNLOAD
Feb 8 23:21:38.414000 audit: BPF prog-id=3 op=UNLOAD
Feb 8 23:21:38.405948 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:21:38.414000 audit: BPF prog-id=8 op=UNLOAD
Feb 8 23:21:38.414000 audit: BPF prog-id=7 op=UNLOAD
Feb 8 23:21:38.411904 systemd[1]: Switching root.
Feb 8 23:21:38.433050 iscsid[708]: iscsid shutting down.
Feb 8 23:21:38.433567 systemd-journald[198]: Journal stopped
Feb 8 23:21:41.655919 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb 8 23:21:41.656002 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:21:41.656021 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:21:41.656035 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 8 23:21:41.656060 kernel: SELinux: policy capability network_peer_controls=1
Feb 8 23:21:41.656080 kernel: SELinux: policy capability open_perms=1
Feb 8 23:21:41.656094 kernel: SELinux: policy capability extended_socket_class=1
Feb 8 23:21:41.656110 kernel: SELinux: policy capability always_check_network=0
Feb 8 23:21:41.656124 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 8 23:21:41.656138 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 8 23:21:41.656152 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 8 23:21:41.656165 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 8 23:21:41.656178 systemd[1]: Successfully loaded SELinux policy in 36.194ms.
Feb 8 23:21:41.656209 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.593ms.
Feb 8 23:21:41.656225 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:21:41.656240 systemd[1]: Detected virtualization kvm.
Feb 8 23:21:41.656254 systemd[1]: Detected architecture x86-64.
Feb 8 23:21:41.656269 systemd[1]: Detected first boot.
Feb 8 23:21:41.656283 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:21:41.656295 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 8 23:21:41.656309 systemd[1]: Populated /etc with preset unit settings.
Feb 8 23:21:41.656324 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:21:41.656340 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:21:41.656359 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:21:41.656376 systemd[1]: Queued start job for default target multi-user.target.
Feb 8 23:21:41.656390 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 8 23:21:41.656405 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 8 23:21:41.656419 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 8 23:21:41.656434 systemd[1]: Created slice system-getty.slice.
Feb 8 23:21:41.656449 systemd[1]: Created slice system-modprobe.slice.
Feb 8 23:21:41.656463 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 8 23:21:41.656487 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 8 23:21:41.656502 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 8 23:21:41.656522 systemd[1]: Created slice user.slice.
Feb 8 23:21:41.656537 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:21:41.656551 systemd[1]: Started systemd-ask-password-wall.path.
Feb 8 23:21:41.656565 systemd[1]: Set up automount boot.automount.
Feb 8 23:21:41.656579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 8 23:21:41.656593 systemd[1]: Reached target integritysetup.target.
Feb 8 23:21:41.656606 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:21:41.656625 systemd[1]: Reached target remote-fs.target.
Feb 8 23:21:41.656639 systemd[1]: Reached target slices.target.
Feb 8 23:21:41.656652 systemd[1]: Reached target swap.target.
Feb 8 23:21:41.656666 systemd[1]: Reached target torcx.target.
Feb 8 23:21:41.656680 systemd[1]: Reached target veritysetup.target.
Feb 8 23:21:41.656694 systemd[1]: Listening on systemd-coredump.socket.
Feb 8 23:21:41.656708 systemd[1]: Listening on systemd-initctl.socket.
Feb 8 23:21:41.656721 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:21:41.656737 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:21:41.656751 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:21:41.656795 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:21:41.656810 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:21:41.656821 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:21:41.656832 systemd[1]: Listening on systemd-userdbd.socket.
Feb 8 23:21:41.656841 systemd[1]: Mounting dev-hugepages.mount...
Feb 8 23:21:41.656852 systemd[1]: Mounting dev-mqueue.mount...
Feb 8 23:21:41.656862 systemd[1]: Mounting media.mount...
Feb 8 23:21:41.656872 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 8 23:21:41.656883 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 8 23:21:41.656893 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 8 23:21:41.656903 systemd[1]: Mounting tmp.mount...
Feb 8 23:21:41.656914 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 8 23:21:41.656924 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 8 23:21:41.656934 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:21:41.656944 systemd[1]: Starting modprobe@configfs.service...
Feb 8 23:21:41.656954 systemd[1]: Starting modprobe@dm_mod.service...
Feb 8 23:21:41.656964 systemd[1]: Starting modprobe@drm.service...
Feb 8 23:21:41.656975 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 8 23:21:41.656992 systemd[1]: Starting modprobe@fuse.service...
Feb 8 23:21:41.657002 systemd[1]: Starting modprobe@loop.service...
Feb 8 23:21:41.657012 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:21:41.657023 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 8 23:21:41.657033 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 8 23:21:41.657042 systemd[1]: Starting systemd-journald.service... Feb 8 23:21:41.657053 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:21:41.657062 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:21:41.657073 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:21:41.657083 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:21:41.657093 kernel: loop: module loaded Feb 8 23:21:41.657103 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:21:41.657114 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:21:41.657124 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:21:41.657134 systemd[1]: Mounted media.mount. Feb 8 23:21:41.657144 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:21:41.657154 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:21:41.657165 systemd[1]: Mounted tmp.mount. Feb 8 23:21:41.657174 kernel: fuse: init (API version 7.34) Feb 8 23:21:41.657184 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:21:41.657194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:21:41.657204 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:21:41.657213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:21:41.657223 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:21:41.657233 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:21:41.657243 systemd[1]: Finished modprobe@drm.service. 
Feb 8 23:21:41.657257 systemd-journald[1008]: Journal started Feb 8 23:21:41.657297 systemd-journald[1008]: Runtime Journal (/run/log/journal/36d2b7d0319a457e87a0b29f61f151ef) is 6.0M, max 48.5M, 42.5M free. Feb 8 23:21:41.580000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:21:41.580000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:21:41.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.652000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:21:41.652000 audit[1008]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe27144890 a2=4000 a3=7ffe2714492c items=0 ppid=1 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:41.652000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:21:41.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:41.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.658187 systemd[1]: Started systemd-journald.service. Feb 8 23:21:41.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.659622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:21:41.659988 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:21:41.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.660966 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Feb 8 23:21:41.661195 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:21:41.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.661977 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:21:41.662200 systemd[1]: Finished modprobe@loop.service. Feb 8 23:21:41.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.663296 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:21:41.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.664318 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:21:41.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.665324 systemd[1]: Finished systemd-network-generator.service. 
Feb 8 23:21:41.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.666426 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:21:41.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.667370 systemd[1]: Reached target network-pre.target. Feb 8 23:21:41.669237 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:21:41.671088 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:21:41.671698 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:21:41.673545 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:21:41.675753 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:21:41.676574 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:21:41.678174 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:21:41.679887 systemd-journald[1008]: Time spent on flushing to /var/log/journal/36d2b7d0319a457e87a0b29f61f151ef is 15.223ms for 1065 entries. Feb 8 23:21:41.679887 systemd-journald[1008]: System Journal (/var/log/journal/36d2b7d0319a457e87a0b29f61f151ef) is 8.0M, max 195.6M, 187.6M free. Feb 8 23:21:41.705952 systemd-journald[1008]: Received client request to flush runtime journal. Feb 8 23:21:41.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:41.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.678917 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:21:41.680271 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:21:41.683052 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:21:41.688487 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:21:41.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.689388 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:21:41.691189 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:21:41.692005 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:21:41.696503 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:21:41.699538 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:21:41.701720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:21:41.707401 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:21:41.709142 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:21:41.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:41.710871 systemd[1]: Starting systemd-udev-settle.service... 
Feb 8 23:21:41.717049 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:21:41.720116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:21:41.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.100852 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:21:42.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.102608 systemd[1]: Starting systemd-udevd.service... Feb 8 23:21:42.117162 systemd-udevd[1063]: Using default interface naming scheme 'v252'. Feb 8 23:21:42.129000 systemd[1]: Started systemd-udevd.service. Feb 8 23:21:42.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.130757 systemd[1]: Starting systemd-networkd.service... Feb 8 23:21:42.142740 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:21:42.154003 systemd[1]: Found device dev-ttyS0.device. Feb 8 23:21:42.172841 systemd[1]: Started systemd-userdbd.service. Feb 8 23:21:42.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.187786 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:21:42.190250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 8 23:21:42.198793 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:21:42.198000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:21:42.198000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559a8c502990 a1=32194 a2=7fc38941bbc5 a3=5 items=108 ppid=1063 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:42.198000 audit: CWD cwd="/" Feb 8 23:21:42.198000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=1 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=2 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=3 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=4 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=5 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 
23:21:42.198000 audit: PATH item=6 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=7 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=8 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=9 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=10 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=11 name=(null) inode=14789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=12 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=13 name=(null) inode=14790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=14 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=15 name=(null) 
inode=14791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=16 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=17 name=(null) inode=14792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=18 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=19 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=20 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=21 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=22 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=23 name=(null) inode=14795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=24 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=25 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=26 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=27 name=(null) inode=14797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=28 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=29 name=(null) inode=14798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=30 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=31 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=32 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=33 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=34 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=35 name=(null) inode=14801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=36 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=37 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=38 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=39 name=(null) inode=14803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=40 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=41 name=(null) inode=14804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=42 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=43 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=44 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=45 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=46 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=47 name=(null) inode=14807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=48 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=49 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=50 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=51 name=(null) inode=14809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 
audit: PATH item=52 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=53 name=(null) inode=14810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=55 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=56 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=57 name=(null) inode=14812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=58 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=59 name=(null) inode=14813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=60 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=61 name=(null) inode=14814 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=62 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=63 name=(null) inode=14815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=64 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=65 name=(null) inode=14816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=66 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=67 name=(null) inode=14817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=68 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=69 name=(null) inode=14818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=70 name=(null) inode=14814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=71 name=(null) inode=14819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=72 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=73 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=74 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=75 name=(null) inode=14821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=76 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=77 name=(null) inode=14822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=78 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=79 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=80 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=81 name=(null) inode=14824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=82 name=(null) inode=14820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=83 name=(null) inode=14825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=84 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=85 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=86 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=87 name=(null) inode=14827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=88 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 8 23:21:42.198000 audit: PATH item=89 name=(null) inode=14828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=90 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=91 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=92 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=93 name=(null) inode=14830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=94 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=95 name=(null) inode=14831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=96 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=97 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=98 
name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=99 name=(null) inode=14833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=100 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=101 name=(null) inode=14834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=102 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=103 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=104 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=105 name=(null) inode=14836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=106 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PATH item=107 name=(null) inode=14837 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:21:42.198000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:21:42.238994 systemd-networkd[1070]: lo: Link UP Feb 8 23:21:42.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.239001 systemd-networkd[1070]: lo: Gained carrier Feb 8 23:21:42.239328 systemd-networkd[1070]: Enumeration completed Feb 8 23:21:42.239415 systemd-networkd[1070]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:21:42.239432 systemd[1]: Started systemd-networkd.service. Feb 8 23:21:42.241613 systemd-networkd[1070]: eth0: Link UP Feb 8 23:21:42.241618 systemd-networkd[1070]: eth0: Gained carrier Feb 8 23:21:42.242800 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:21:42.249900 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:21:42.253875 systemd-networkd[1070]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:21:42.256825 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:21:42.285000 kernel: kvm: Nested Virtualization enabled Feb 8 23:21:42.285100 kernel: SVM: kvm: Nested Paging enabled Feb 8 23:21:42.286013 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 8 23:21:42.286043 kernel: SVM: Virtual GIF supported Feb 8 23:21:42.301783 kernel: EDAC MC: Ver: 3.0.0 Feb 8 23:21:42.319141 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:21:42.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:42.321031 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:21:42.327227 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:21:42.351395 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:21:42.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.352178 systemd[1]: Reached target cryptsetup.target. Feb 8 23:21:42.353747 systemd[1]: Starting lvm2-activation.service... Feb 8 23:21:42.357276 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:21:42.388427 systemd[1]: Finished lvm2-activation.service. Feb 8 23:21:42.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.389130 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:21:42.389750 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:21:42.389779 systemd[1]: Reached target local-fs.target. Feb 8 23:21:42.390364 systemd[1]: Reached target machines.target. Feb 8 23:21:42.391854 systemd[1]: Starting ldconfig.service... Feb 8 23:21:42.392590 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:21:42.392620 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:21:42.393488 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:21:42.394929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Feb 8 23:21:42.396566 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:21:42.397291 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:21:42.397332 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:21:42.398111 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:21:42.399188 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) Feb 8 23:21:42.401398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:21:42.402508 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:21:42.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.406430 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:21:42.406938 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:21:42.407976 systemd-tmpfiles[1108]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:21:42.431538 systemd-fsck[1114]: fsck.fat 4.2 (2021-01-31) Feb 8 23:21:42.431538 systemd-fsck[1114]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:21:42.432669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:21:42.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.434915 systemd[1]: Mounting boot.mount... Feb 8 23:21:42.445550 systemd[1]: Mounted boot.mount. 
Feb 8 23:21:42.456745 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:21:42.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.627497 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:21:42.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.630113 systemd[1]: Starting audit-rules.service... Feb 8 23:21:42.631496 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:21:42.633029 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:21:42.635169 systemd[1]: Starting systemd-resolved.service... Feb 8 23:21:42.637279 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:21:42.641325 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:21:42.642000 audit[1128]: SYSTEM_BOOT pid=1128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.643095 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:21:42.643822 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:21:42.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.647092 systemd[1]: Finished clean-ca-certificates.service. 
Feb 8 23:21:42.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.651690 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:21:42.652794 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:21:42.659749 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:21:42.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:42.668708 augenrules[1145]: No rules Feb 8 23:21:42.667000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:21:42.667000 audit[1145]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2dce55f0 a2=420 a3=0 items=0 ppid=1121 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:42.667000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:21:42.669469 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:21:42.669509 systemd[1]: Finished audit-rules.service. Feb 8 23:21:42.673368 systemd[1]: Finished ldconfig.service. 
Feb 8 23:21:42.674909 systemd[1]: Starting systemd-update-done.service... Feb 8 23:21:42.680184 systemd[1]: Finished systemd-update-done.service. Feb 8 23:21:42.700732 systemd-resolved[1125]: Positive Trust Anchors: Feb 8 23:21:42.700746 systemd-resolved[1125]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:21:42.700785 systemd-resolved[1125]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:21:42.706651 systemd-resolved[1125]: Defaulting to hostname 'linux'. Feb 8 23:21:42.707945 systemd[1]: Started systemd-resolved.service. Feb 8 23:21:42.708614 systemd[1]: Reached target network.target. Feb 8 23:21:42.709207 systemd[1]: Reached target nss-lookup.target. Feb 8 23:21:42.709987 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:21:42.710782 systemd[1]: Reached target sysinit.target. Feb 8 23:21:42.711022 systemd-timesyncd[1126]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 8 23:21:42.711084 systemd-timesyncd[1126]: Initial clock synchronization to Thu 2024-02-08 23:21:42.841831 UTC. Feb 8 23:21:42.711450 systemd[1]: Started motdgen.path. Feb 8 23:21:42.712034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:21:42.712838 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:21:42.713467 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:21:42.713489 systemd[1]: Reached target paths.target. Feb 8 23:21:42.714063 systemd[1]: Reached target time-set.target. 
Feb 8 23:21:42.714737 systemd[1]: Started logrotate.timer. Feb 8 23:21:42.715361 systemd[1]: Started mdadm.timer. Feb 8 23:21:42.715884 systemd[1]: Reached target timers.target. Feb 8 23:21:42.716684 systemd[1]: Listening on dbus.socket. Feb 8 23:21:42.718130 systemd[1]: Starting docker.socket... Feb 8 23:21:42.719400 systemd[1]: Listening on sshd.socket. Feb 8 23:21:42.720051 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:21:42.720307 systemd[1]: Listening on docker.socket. Feb 8 23:21:42.720903 systemd[1]: Reached target sockets.target. Feb 8 23:21:42.721514 systemd[1]: Reached target basic.target. Feb 8 23:21:42.722187 systemd[1]: System is tainted: cgroupsv1 Feb 8 23:21:42.722223 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:21:42.722239 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:21:42.723083 systemd[1]: Starting containerd.service... Feb 8 23:21:42.724481 systemd[1]: Starting dbus.service... Feb 8 23:21:42.725841 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:21:42.727473 systemd[1]: Starting extend-filesystems.service... Feb 8 23:21:42.728180 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:21:42.729104 systemd[1]: Starting motdgen.service... Feb 8 23:21:42.729924 jq[1160]: false Feb 8 23:21:42.730374 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:21:42.731819 systemd[1]: Starting prepare-critools.service... Feb 8 23:21:42.734351 systemd[1]: Starting prepare-helm.service... Feb 8 23:21:42.735907 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:21:42.738351 systemd[1]: Starting sshd-keygen.service... 
Feb 8 23:21:42.743624 extend-filesystems[1161]: Found sr0 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda1 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda2 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda3 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found usr Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda4 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda6 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda7 Feb 8 23:21:42.743624 extend-filesystems[1161]: Found vda9 Feb 8 23:21:42.743624 extend-filesystems[1161]: Checking size of /dev/vda9 Feb 8 23:21:42.742565 systemd[1]: Starting systemd-logind.service... Feb 8 23:21:42.746694 dbus-daemon[1159]: [system] SELinux support is enabled Feb 8 23:21:42.743226 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:21:42.743285 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:21:42.744798 systemd[1]: Starting update-engine.service... Feb 8 23:21:42.746517 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:21:42.748783 systemd[1]: Started dbus.service. Feb 8 23:21:42.752262 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:21:42.752483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:21:42.756318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:21:42.762131 jq[1181]: true Feb 8 23:21:42.757028 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:21:42.763639 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 8 23:21:42.763668 systemd[1]: Reached target system-config.target. Feb 8 23:21:42.764357 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:21:42.764374 systemd[1]: Reached target user-config.target. Feb 8 23:21:42.777823 jq[1195]: true Feb 8 23:21:42.777916 tar[1188]: ./ Feb 8 23:21:42.777916 tar[1188]: ./macvlan Feb 8 23:21:42.787987 tar[1192]: linux-amd64/helm Feb 8 23:21:42.788143 tar[1190]: crictl Feb 8 23:21:42.766693 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:21:42.788417 extend-filesystems[1161]: Resized partition /dev/vda9 Feb 8 23:21:42.766898 systemd[1]: Finished motdgen.service. Feb 8 23:21:42.800867 extend-filesystems[1218]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:21:42.804789 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 8 23:21:42.820071 env[1196]: time="2024-02-08T23:21:42.820034161Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:21:42.821962 update_engine[1180]: I0208 23:21:42.820361 1180 main.cc:92] Flatcar Update Engine starting Feb 8 23:21:42.823797 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 8 23:21:42.824695 systemd[1]: Started update-engine.service. Feb 8 23:21:42.826513 update_engine[1180]: I0208 23:21:42.826480 1180 update_check_scheduler.cc:74] Next update check in 3m7s Feb 8 23:21:42.827663 systemd[1]: Started locksmithd.service. Feb 8 23:21:42.840304 env[1196]: time="2024-02-08T23:21:42.837387869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 8 23:21:42.840562 systemd-logind[1177]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:21:42.850749 extend-filesystems[1218]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:21:42.850749 extend-filesystems[1218]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 8 23:21:42.850749 extend-filesystems[1218]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 8 23:21:42.854169 tar[1188]: ./static Feb 8 23:21:42.840587 systemd-logind[1177]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:21:42.854231 extend-filesystems[1161]: Resized filesystem in /dev/vda9 Feb 8 23:21:42.854906 bash[1228]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:21:42.840907 systemd-logind[1177]: New seat seat0. Feb 8 23:21:42.855021 env[1196]: time="2024-02-08T23:21:42.854242812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.842337 systemd[1]: Started systemd-logind.service. Feb 8 23:21:42.845783 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:21:42.850055 systemd[1]: Finished extend-filesystems.service. Feb 8 23:21:42.852476 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:21:42.865459 env[1196]: time="2024-02-08T23:21:42.865425089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865459 env[1196]: time="2024-02-08T23:21:42.865457840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865695 env[1196]: time="2024-02-08T23:21:42.865670379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865695 env[1196]: time="2024-02-08T23:21:42.865690136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865783 env[1196]: time="2024-02-08T23:21:42.865701658Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:21:42.865783 env[1196]: time="2024-02-08T23:21:42.865710674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865825 env[1196]: time="2024-02-08T23:21:42.865780716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.865997 env[1196]: time="2024-02-08T23:21:42.865973407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:21:42.866129 env[1196]: time="2024-02-08T23:21:42.866104984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:21:42.866129 env[1196]: time="2024-02-08T23:21:42.866124230Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:21:42.866193 env[1196]: time="2024-02-08T23:21:42.866167671Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:21:42.866193 env[1196]: time="2024-02-08T23:21:42.866177941Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:21:42.879202 tar[1188]: ./vlan Feb 8 23:21:42.882948 env[1196]: time="2024-02-08T23:21:42.882911466Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:21:42.883000 env[1196]: time="2024-02-08T23:21:42.882961700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:21:42.883000 env[1196]: time="2024-02-08T23:21:42.882976237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:21:42.883048 env[1196]: time="2024-02-08T23:21:42.883006063Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883048 env[1196]: time="2024-02-08T23:21:42.883021793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883048 env[1196]: time="2024-02-08T23:21:42.883037192Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883048 env[1196]: time="2024-02-08T23:21:42.883048543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883121 env[1196]: time="2024-02-08T23:21:42.883061738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883121 env[1196]: time="2024-02-08T23:21:42.883074281Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 8 23:21:42.883121 env[1196]: time="2024-02-08T23:21:42.883087075Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883121 env[1196]: time="2024-02-08T23:21:42.883101602Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883195 env[1196]: time="2024-02-08T23:21:42.883122442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:21:42.883248 env[1196]: time="2024-02-08T23:21:42.883224673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:21:42.883329 env[1196]: time="2024-02-08T23:21:42.883300836Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:21:42.883618 env[1196]: time="2024-02-08T23:21:42.883595529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:21:42.883659 env[1196]: time="2024-02-08T23:21:42.883623531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883659 env[1196]: time="2024-02-08T23:21:42.883636275Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:21:42.883706 env[1196]: time="2024-02-08T23:21:42.883674968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883706 env[1196]: time="2024-02-08T23:21:42.883687311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883706 env[1196]: time="2024-02-08T23:21:42.883699343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883711246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883722136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883734529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883745720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883755559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883800 env[1196]: time="2024-02-08T23:21:42.883789823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:21:42.883920 env[1196]: time="2024-02-08T23:21:42.883885713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883920 env[1196]: time="2024-02-08T23:21:42.883900030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883920 env[1196]: time="2024-02-08T23:21:42.883910529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.883920 env[1196]: time="2024-02-08T23:21:42.883920548Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:21:42.884003 env[1196]: time="2024-02-08T23:21:42.883936167Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:21:42.884003 env[1196]: time="2024-02-08T23:21:42.883961014Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:21:42.884003 env[1196]: time="2024-02-08T23:21:42.883977976Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:21:42.884062 env[1196]: time="2024-02-08T23:21:42.884014945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:21:42.884240 env[1196]: time="2024-02-08T23:21:42.884186687Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:21:42.884240 env[1196]: time="2024-02-08T23:21:42.884240598Z" level=info msg="Connect containerd service" Feb 8 23:21:42.884836 env[1196]: time="2024-02-08T23:21:42.884268410Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:21:42.889470 env[1196]: time="2024-02-08T23:21:42.889438384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:21:42.890032 env[1196]: time="2024-02-08T23:21:42.890007601Z" level=info msg="Start subscribing containerd event" Feb 8 23:21:42.890134 env[1196]: time="2024-02-08T23:21:42.890116285Z" level=info msg="Start recovering state" Feb 8 23:21:42.890256 env[1196]: time="2024-02-08T23:21:42.890240478Z" level=info msg="Start event monitor" Feb 8 23:21:42.890333 env[1196]: time="2024-02-08T23:21:42.890312803Z" level=info msg="Start snapshots syncer" Feb 8 23:21:42.890403 env[1196]: time="2024-02-08T23:21:42.890385670Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:21:42.890481 env[1196]: 
time="2024-02-08T23:21:42.890460240Z" level=info msg="Start streaming server" Feb 8 23:21:42.890748 env[1196]: time="2024-02-08T23:21:42.890729865Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:21:42.890878 env[1196]: time="2024-02-08T23:21:42.890859539Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:21:42.891084 systemd[1]: Started containerd.service. Feb 8 23:21:42.892725 env[1196]: time="2024-02-08T23:21:42.892705710Z" level=info msg="containerd successfully booted in 0.079489s" Feb 8 23:21:42.912541 tar[1188]: ./portmap Feb 8 23:21:42.941202 tar[1188]: ./host-local Feb 8 23:21:42.952011 systemd[1]: Created slice system-sshd.slice. Feb 8 23:21:42.966510 tar[1188]: ./vrf Feb 8 23:21:42.993568 tar[1188]: ./bridge Feb 8 23:21:43.026929 tar[1188]: ./tuning Feb 8 23:21:43.053719 tar[1188]: ./firewall Feb 8 23:21:43.088228 tar[1188]: ./host-device Feb 8 23:21:43.118571 tar[1188]: ./sbr Feb 8 23:21:43.146043 tar[1188]: ./loopback Feb 8 23:21:43.172189 tar[1188]: ./dhcp Feb 8 23:21:43.217244 systemd[1]: Finished prepare-critools.service. Feb 8 23:21:43.226247 tar[1192]: linux-amd64/LICENSE Feb 8 23:21:43.226383 tar[1192]: linux-amd64/README.md Feb 8 23:21:43.227495 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:21:43.231121 systemd[1]: Finished prepare-helm.service. Feb 8 23:21:43.247095 tar[1188]: ./ptp Feb 8 23:21:43.276369 tar[1188]: ./ipvlan Feb 8 23:21:43.303612 tar[1188]: ./bandwidth Feb 8 23:21:43.337938 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:21:43.742958 systemd-networkd[1070]: eth0: Gained IPv6LL Feb 8 23:21:43.834989 sshd_keygen[1191]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:21:43.852835 systemd[1]: Finished sshd-keygen.service. Feb 8 23:21:43.854954 systemd[1]: Starting issuegen.service... Feb 8 23:21:43.856598 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:35306.service. 
Feb 8 23:21:43.859327 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:21:43.859537 systemd[1]: Finished issuegen.service. Feb 8 23:21:43.861390 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:21:43.865954 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:21:43.867921 systemd[1]: Started getty@tty1.service. Feb 8 23:21:43.869695 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:21:43.870562 systemd[1]: Reached target getty.target. Feb 8 23:21:43.871273 systemd[1]: Reached target multi-user.target. Feb 8 23:21:43.873202 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:21:43.879287 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:21:43.879490 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:21:43.881856 systemd[1]: Startup finished in 6.315s (kernel) + 5.408s (userspace) = 11.724s. Feb 8 23:21:43.895894 sshd[1264]: Accepted publickey for core from 10.0.0.1 port 35306 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:43.897407 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:43.904963 systemd[1]: Created slice user-500.slice. Feb 8 23:21:43.905918 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:21:43.907695 systemd-logind[1177]: New session 1 of user core. Feb 8 23:21:43.914402 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:21:43.915553 systemd[1]: Starting user@500.service... Feb 8 23:21:43.918289 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:43.998267 systemd[1278]: Queued start job for default target default.target. Feb 8 23:21:43.998474 systemd[1278]: Reached target paths.target. Feb 8 23:21:43.998490 systemd[1278]: Reached target sockets.target. Feb 8 23:21:43.998502 systemd[1278]: Reached target timers.target. Feb 8 23:21:43.998513 systemd[1278]: Reached target basic.target. 
Feb 8 23:21:43.998554 systemd[1278]: Reached target default.target. Feb 8 23:21:43.998574 systemd[1278]: Startup finished in 75ms. Feb 8 23:21:43.998753 systemd[1]: Started user@500.service. Feb 8 23:21:43.999693 systemd[1]: Started session-1.scope. Feb 8 23:21:44.050271 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:35318.service. Feb 8 23:21:44.086551 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 35318 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.087646 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.091307 systemd-logind[1177]: New session 2 of user core. Feb 8 23:21:44.092142 systemd[1]: Started session-2.scope. Feb 8 23:21:44.146789 sshd[1287]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:44.148964 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:35332.service. Feb 8 23:21:44.149859 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:35318.service: Deactivated successfully. Feb 8 23:21:44.150567 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:21:44.150688 systemd-logind[1177]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:21:44.151540 systemd-logind[1177]: Removed session 2. Feb 8 23:21:44.181864 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 35332 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.182913 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.185979 systemd-logind[1177]: New session 3 of user core. Feb 8 23:21:44.186577 systemd[1]: Started session-3.scope. Feb 8 23:21:44.235320 sshd[1292]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:44.237853 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:35348.service. Feb 8 23:21:44.238260 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:35332.service: Deactivated successfully. Feb 8 23:21:44.239026 systemd-logind[1177]: Session 3 logged out. Waiting for processes to exit. 
Feb 8 23:21:44.239085 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:21:44.240074 systemd-logind[1177]: Removed session 3. Feb 8 23:21:44.269645 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.270336 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.273549 systemd-logind[1177]: New session 4 of user core. Feb 8 23:21:44.274220 systemd[1]: Started session-4.scope. Feb 8 23:21:44.328641 sshd[1300]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:44.330741 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:35362.service. Feb 8 23:21:44.331090 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:35348.service: Deactivated successfully. Feb 8 23:21:44.332112 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:21:44.332627 systemd-logind[1177]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:21:44.333459 systemd-logind[1177]: Removed session 4. Feb 8 23:21:44.361833 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 35362 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.362560 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.365550 systemd-logind[1177]: New session 5 of user core. Feb 8 23:21:44.366154 systemd[1]: Started session-5.scope. Feb 8 23:21:44.420272 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 8 23:21:44.420451 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:21:44.426981 dbus-daemon[1159]: \xd0͗\xe8QV: received setenforce notice (enforcing=1820283472) Feb 8 23:21:44.428970 sudo[1312]: pam_unix(sudo:session): session closed for user root Feb 8 23:21:44.430460 sshd[1306]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:44.432498 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:35370.service. 
Feb 8 23:21:44.433291 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:35362.service: Deactivated successfully. Feb 8 23:21:44.433985 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:21:44.434405 systemd-logind[1177]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:21:44.435029 systemd-logind[1177]: Removed session 5. Feb 8 23:21:44.464485 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 35370 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.465368 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.468158 systemd-logind[1177]: New session 6 of user core. Feb 8 23:21:44.468783 systemd[1]: Started session-6.scope. Feb 8 23:21:44.521008 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 8 23:21:44.521172 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:21:44.523576 sudo[1321]: pam_unix(sudo:session): session closed for user root Feb 8 23:21:44.527573 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 8 23:21:44.527743 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:21:44.534748 systemd[1]: Stopping audit-rules.service... Feb 8 23:21:44.535000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:21:44.536086 auditctl[1324]: No rules Feb 8 23:21:44.536315 systemd[1]: audit-rules.service: Deactivated successfully. Feb 8 23:21:44.536491 systemd[1]: Stopped audit-rules.service. 
Feb 8 23:21:44.540604 kernel: kauditd_printk_skb: 207 callbacks suppressed Feb 8 23:21:44.540662 kernel: audit: type=1305 audit(1707434504.535:129): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:21:44.540679 kernel: audit: type=1300 audit(1707434504.535:129): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3a9a9680 a2=420 a3=0 items=0 ppid=1 pid=1324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:44.535000 audit[1324]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3a9a9680 a2=420 a3=0 items=0 ppid=1 pid=1324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:44.537768 systemd[1]: Starting audit-rules.service... Feb 8 23:21:44.541573 kernel: audit: type=1327 audit(1707434504.535:129): proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:21:44.535000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:21:44.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.545023 kernel: audit: type=1131 audit(1707434504.535:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.552223 augenrules[1342]: No rules Feb 8 23:21:44.552959 systemd[1]: Finished audit-rules.service. 
Feb 8 23:21:44.553770 sudo[1320]: pam_unix(sudo:session): session closed for user root Feb 8 23:21:44.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.554839 sshd[1314]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:44.552000 audit[1320]: USER_END pid=1320 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.558501 kernel: audit: type=1130 audit(1707434504.551:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.558594 kernel: audit: type=1106 audit(1707434504.552:132): pid=1320 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.558614 kernel: audit: type=1104 audit(1707434504.552:133): pid=1320 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.552000 audit[1320]: CRED_DISP pid=1320 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.558690 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:35374.service. Feb 8 23:21:44.559042 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:35370.service: Deactivated successfully. 
Feb 8 23:21:44.560167 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:21:44.563374 kernel: audit: type=1106 audit(1707434504.555:134): pid=1314 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.555000 audit[1314]: USER_END pid=1314 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.560670 systemd-logind[1177]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:21:44.562290 systemd-logind[1177]: Removed session 6. Feb 8 23:21:44.555000 audit[1314]: CRED_DISP pid=1314 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.565759 kernel: audit: type=1104 audit(1707434504.555:135): pid=1314 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.76:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.568119 kernel: audit: type=1130 audit(1707434504.556:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.76:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 8 23:21:44.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.76:22-10.0.0.1:35370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.592000 audit[1349]: USER_ACCT pid=1349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.593800 sshd[1349]: Accepted publickey for core from 10.0.0.1 port 35374 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:21:44.593000 audit[1349]: CRED_ACQ pid=1349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.593000 audit[1349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3857af70 a2=3 a3=0 items=0 ppid=1 pid=1349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:44.593000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:21:44.594654 sshd[1349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:44.597787 systemd-logind[1177]: New session 7 of user core. Feb 8 23:21:44.598524 systemd[1]: Started session-7.scope. 
Feb 8 23:21:44.601000 audit[1349]: USER_START pid=1349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.602000 audit[1353]: CRED_ACQ pid=1353 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:21:44.648000 audit[1354]: USER_ACCT pid=1354 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.648000 audit[1354]: CRED_REFR pid=1354 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:44.648936 sudo[1354]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:21:44.649094 sudo[1354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:21:44.649000 audit[1354]: USER_START pid=1354 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:21:45.161716 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:21:45.166706 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:21:45.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:45.166966 systemd[1]: Reached target network-online.target. Feb 8 23:21:45.168076 systemd[1]: Starting docker.service... Feb 8 23:21:45.201404 env[1373]: time="2024-02-08T23:21:45.201348142Z" level=info msg="Starting up" Feb 8 23:21:45.202947 env[1373]: time="2024-02-08T23:21:45.202920568Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:21:45.203030 env[1373]: time="2024-02-08T23:21:45.203011856Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:21:45.203116 env[1373]: time="2024-02-08T23:21:45.203095779Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:21:45.203183 env[1373]: time="2024-02-08T23:21:45.203165572Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:21:45.206440 env[1373]: time="2024-02-08T23:21:45.206414647Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:21:45.206440 env[1373]: time="2024-02-08T23:21:45.206432764Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:21:45.206513 env[1373]: time="2024-02-08T23:21:45.206446328Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:21:45.206513 env[1373]: time="2024-02-08T23:21:45.206455274Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:21:45.734857 env[1373]: time="2024-02-08T23:21:45.734806338Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 8 23:21:45.734857 env[1373]: time="2024-02-08T23:21:45.734833159Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 8 23:21:45.735088 env[1373]: time="2024-02-08T23:21:45.734969651Z" level=info msg="Loading containers: start." 
Feb 8 23:21:45.769000 audit[1404]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.769000 audit[1404]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce26beda0 a2=0 a3=7ffce26bed8c items=0 ppid=1373 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.769000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 8 23:21:45.770000 audit[1406]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.770000 audit[1406]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffed0ca6d10 a2=0 a3=7ffed0ca6cfc items=0 ppid=1373 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.770000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 8 23:21:45.771000 audit[1408]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.771000 audit[1408]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1001dc70 a2=0 a3=7ffe1001dc5c items=0 ppid=1373 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.771000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Feb 8 23:21:45.772000 audit[1410]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.772000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4754c760 a2=0 a3=7fff4754c74c items=0 ppid=1373 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.772000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 8 23:21:45.772000 audit[1412]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.772000 audit[1412]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc63db0bf0 a2=0 a3=7ffc63db0bdc items=0 ppid=1373 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.772000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 8 23:21:45.785000 audit[1417]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.785000 audit[1417]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb7f348f0 a2=0 a3=7fffb7f348dc items=0 ppid=1373 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.785000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 8 23:21:45.793000 audit[1419]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.793000 audit[1419]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed61dd0c0 a2=0 a3=7ffed61dd0ac items=0 ppid=1373 pid=1419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.793000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 8 23:21:45.794000 audit[1421]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.794000 audit[1421]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdb94f9e80 a2=0 a3=7ffdb94f9e6c items=0 ppid=1373 pid=1421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.794000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 8 23:21:45.796000 audit[1423]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.796000 audit[1423]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffeba2dc2b0 a2=0 a3=7ffeba2dc29c items=0 ppid=1373 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:21:45.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:21:45.803000 audit[1427]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.803000 audit[1427]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe958ad730 a2=0 a3=7ffe958ad71c items=0 ppid=1373 pid=1427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.803000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:21:45.804000 audit[1428]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.804000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd55b98830 a2=0 a3=7ffd55b9881c items=0 ppid=1373 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.804000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:21:45.810795 kernel: Initializing XFRM netlink socket Feb 8 23:21:45.835012 env[1373]: time="2024-02-08T23:21:45.834984903Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 8 23:21:45.848000 audit[1435]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.848000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffee61dadc0 a2=0 a3=7ffee61dadac items=0 ppid=1373 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.848000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 8 23:21:45.857000 audit[1438]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.857000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcb8ec2020 a2=0 a3=7ffcb8ec200c items=0 ppid=1373 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.857000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 8 23:21:45.859000 audit[1441]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.859000 audit[1441]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc8c20a350 a2=0 a3=7ffc8c20a33c items=0 ppid=1373 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.859000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 8 23:21:45.861000 audit[1443]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.861000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffef0a2d530 a2=0 a3=7ffef0a2d51c items=0 ppid=1373 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 8 23:21:45.862000 audit[1445]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.862000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcf588e860 a2=0 a3=7ffcf588e84c items=0 ppid=1373 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.862000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 8 23:21:45.863000 audit[1447]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.863000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff0216c350 a2=0 a3=7fff0216c33c items=0 ppid=1373 pid=1447 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.863000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 8 23:21:45.865000 audit[1449]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.865000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffdef084c0 a2=0 a3=7fffdef084ac items=0 ppid=1373 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.865000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 8 23:21:45.870000 audit[1452]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.870000 audit[1452]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd8d582560 a2=0 a3=7ffd8d58254c items=0 ppid=1373 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.870000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 8 23:21:45.873000 audit[1454]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule 
pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.873000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd5b22cda0 a2=0 a3=7ffd5b22cd8c items=0 ppid=1373 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.873000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 8 23:21:45.874000 audit[1456]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.874000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd15d5f370 a2=0 a3=7ffd15d5f35c items=0 ppid=1373 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.874000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 8 23:21:45.875000 audit[1458]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.875000 audit[1458]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc628d3880 a2=0 a3=7ffc628d386c items=0 ppid=1373 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.875000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 8 23:21:45.876818 systemd-networkd[1070]: docker0: Link UP Feb 8 23:21:45.909000 audit[1462]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.909000 audit[1462]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb91b94c0 a2=0 a3=7ffeb91b94ac items=0 ppid=1373 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.909000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:21:45.909000 audit[1463]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:21:45.909000 audit[1463]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb87ac8a0 a2=0 a3=7ffcb87ac88c items=0 ppid=1373 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:21:45.909000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:21:45.910776 env[1373]: time="2024-02-08T23:21:45.910733885Z" level=info msg="Loading containers: done." 
Feb 8 23:21:45.922756 env[1373]: time="2024-02-08T23:21:45.922714870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:21:45.922897 env[1373]: time="2024-02-08T23:21:45.922875657Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:21:45.922962 env[1373]: time="2024-02-08T23:21:45.922944983Z" level=info msg="Daemon has completed initialization" Feb 8 23:21:45.936432 systemd[1]: Started docker.service. Feb 8 23:21:45.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:45.939692 env[1373]: time="2024-02-08T23:21:45.939658068Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:21:45.954478 systemd[1]: Reloading. Feb 8 23:21:46.013830 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2024-02-08T23:21:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:21:46.013865 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2024-02-08T23:21:46Z" level=info msg="torcx already run" Feb 8 23:21:46.071152 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:21:46.071165 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 8 23:21:46.089509 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:21:46.150192 systemd[1]: Started kubelet.service. Feb 8 23:21:46.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:46.191769 kubelet[1560]: E0208 23:21:46.191717 1560 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:21:46.193468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:21:46.193690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:21:46.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:21:46.615818 env[1196]: time="2024-02-08T23:21:46.615773086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:21:47.449376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557685762.mount: Deactivated successfully. 
Feb 8 23:21:49.275927 env[1196]: time="2024-02-08T23:21:49.275864037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:49.277552 env[1196]: time="2024-02-08T23:21:49.277500128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:49.281059 env[1196]: time="2024-02-08T23:21:49.281029586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:49.281760 env[1196]: time="2024-02-08T23:21:49.281731046Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:49.282550 env[1196]: time="2024-02-08T23:21:49.282522235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:21:49.291084 env[1196]: time="2024-02-08T23:21:49.291053656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:21:51.391531 env[1196]: time="2024-02-08T23:21:51.391473936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:51.393836 env[1196]: time="2024-02-08T23:21:51.393808752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:21:51.395550 env[1196]: time="2024-02-08T23:21:51.395527264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:51.397085 env[1196]: time="2024-02-08T23:21:51.397057573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:51.397636 env[1196]: time="2024-02-08T23:21:51.397606435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:21:51.406165 env[1196]: time="2024-02-08T23:21:51.406126574Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:21:53.364890 env[1196]: time="2024-02-08T23:21:53.364832617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:53.366815 env[1196]: time="2024-02-08T23:21:53.366781196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:53.368796 env[1196]: time="2024-02-08T23:21:53.368750574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:53.371266 env[1196]: time="2024-02-08T23:21:53.371223620Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:53.372074 env[1196]: time="2024-02-08T23:21:53.372036380Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:21:53.381570 env[1196]: time="2024-02-08T23:21:53.381519207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:21:54.635283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339324028.mount: Deactivated successfully. Feb 8 23:21:55.070044 env[1196]: time="2024-02-08T23:21:55.068341210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:55.072469 env[1196]: time="2024-02-08T23:21:55.072434354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:55.074066 env[1196]: time="2024-02-08T23:21:55.074023173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:55.075325 env[1196]: time="2024-02-08T23:21:55.075307230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:55.075861 env[1196]: time="2024-02-08T23:21:55.075836609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:21:55.085472 env[1196]: time="2024-02-08T23:21:55.085437628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:21:55.993514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365048082.mount: Deactivated successfully. Feb 8 23:21:55.998120 env[1196]: time="2024-02-08T23:21:55.998083523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:55.999741 env[1196]: time="2024-02-08T23:21:55.999690556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:56.001240 env[1196]: time="2024-02-08T23:21:56.001208829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:56.002568 env[1196]: time="2024-02-08T23:21:56.002541473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:56.002980 env[1196]: time="2024-02-08T23:21:56.002952965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:21:56.010720 env[1196]: time="2024-02-08T23:21:56.010687234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:21:56.373760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:21:56.373945 systemd[1]: Stopped kubelet.service. 
Feb 8 23:21:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:56.375184 systemd[1]: Started kubelet.service. Feb 8 23:21:56.376571 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 8 23:21:56.376618 kernel: audit: type=1130 audit(1707434516.372:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:56.376640 kernel: audit: type=1131 audit(1707434516.372:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:56.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:56.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:21:56.380748 kernel: audit: type=1130 audit(1707434516.373:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:21:56.413269 kubelet[1615]: E0208 23:21:56.413208 1615 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:21:56.416286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:21:56.416410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:21:56.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:21:56.419777 kernel: audit: type=1131 audit(1707434516.414:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:21:57.386084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794390760.mount: Deactivated successfully. 
Feb 8 23:22:01.807788 env[1196]: time="2024-02-08T23:22:01.807711738Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:01.809693 env[1196]: time="2024-02-08T23:22:01.809639804Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:01.811059 env[1196]: time="2024-02-08T23:22:01.811036000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:01.812758 env[1196]: time="2024-02-08T23:22:01.812722037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:01.813371 env[1196]: time="2024-02-08T23:22:01.813330083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:22:01.821440 env[1196]: time="2024-02-08T23:22:01.821408238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:22:02.441151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798960480.mount: Deactivated successfully. 
Feb 8 23:22:03.431561 env[1196]: time="2024-02-08T23:22:03.431512958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:03.433664 env[1196]: time="2024-02-08T23:22:03.433636698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:03.435067 env[1196]: time="2024-02-08T23:22:03.435026493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:03.436271 env[1196]: time="2024-02-08T23:22:03.436245936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:03.436671 env[1196]: time="2024-02-08T23:22:03.436647833Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:22:06.406328 systemd[1]: Stopped kubelet.service. Feb 8 23:22:06.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:06.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:06.408788 kernel: audit: type=1130 audit(1707434526.405:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:06.408850 kernel: audit: type=1131 audit(1707434526.407:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:06.421778 systemd[1]: Reloading. Feb 8 23:22:06.476190 /usr/lib/systemd/system-generators/torcx-generator[1721]: time="2024-02-08T23:22:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:22:06.476553 /usr/lib/systemd/system-generators/torcx-generator[1721]: time="2024-02-08T23:22:06Z" level=info msg="torcx already run" Feb 8 23:22:06.543833 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:22:06.543849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:22:06.562287 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:22:06.635068 systemd[1]: Started kubelet.service. Feb 8 23:22:06.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:06.637777 kernel: audit: type=1130 audit(1707434526.634:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:06.673894 kubelet[1769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:22:06.673894 kubelet[1769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:22:06.673894 kubelet[1769]: I0208 23:22:06.673845 1769 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:22:06.675000 kubelet[1769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:22:06.675000 kubelet[1769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:22:07.039806 kubelet[1769]: I0208 23:22:07.039699 1769 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:22:07.039806 kubelet[1769]: I0208 23:22:07.039733 1769 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:22:07.040075 kubelet[1769]: I0208 23:22:07.040051 1769 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:22:07.043111 kubelet[1769]: I0208 23:22:07.043090 1769 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:22:07.043531 kubelet[1769]: E0208 23:22:07.043508 1769 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.047944 kubelet[1769]: I0208 23:22:07.047915 1769 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:22:07.048519 kubelet[1769]: I0208 23:22:07.048491 1769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:22:07.048673 kubelet[1769]: I0208 23:22:07.048651 1769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:22:07.048794 kubelet[1769]: I0208 23:22:07.048686 1769 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:22:07.048794 kubelet[1769]: I0208 23:22:07.048702 1769 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:22:07.048919 kubelet[1769]: I0208 23:22:07.048891 1769 state_mem.go:36] "Initialized new 
in-memory state store" Feb 8 23:22:07.052511 kubelet[1769]: I0208 23:22:07.052487 1769 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:22:07.052511 kubelet[1769]: I0208 23:22:07.052516 1769 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:22:07.052684 kubelet[1769]: I0208 23:22:07.052538 1769 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:22:07.052684 kubelet[1769]: I0208 23:22:07.052551 1769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:22:07.057059 kubelet[1769]: I0208 23:22:07.057028 1769 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:22:07.057164 kubelet[1769]: W0208 23:22:07.057105 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.057296 kubelet[1769]: E0208 23:22:07.057183 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.057296 kubelet[1769]: W0208 23:22:07.057188 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.057296 kubelet[1769]: E0208 23:22:07.057243 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 
23:22:07.057296 kubelet[1769]: W0208 23:22:07.057265 1769 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:22:07.057582 kubelet[1769]: I0208 23:22:07.057561 1769 server.go:1186] "Started kubelet" Feb 8 23:22:07.057816 kubelet[1769]: I0208 23:22:07.057795 1769 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:22:07.058300 kubelet[1769]: E0208 23:22:07.058200 1769 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206a7ee808e05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 22, 7, 57546757, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 22, 7, 57546757, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.76:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.76:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:22:07.058513 kubelet[1769]: E0208 23:22:07.058499 1769 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory 
cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:22:07.058589 kubelet[1769]: E0208 23:22:07.058574 1769 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:22:07.058000 audit[1769]: AVC avc: denied { mac_admin } for pid=1769 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059057 1769 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059095 1769 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059137 1769 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059175 1769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:22:07.060866 kubelet[1769]: E0208 23:22:07.059761 1769 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059793 1769 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:22:07.060866 kubelet[1769]: I0208 23:22:07.059849 1769 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:22:07.060866 kubelet[1769]: W0208 23:22:07.060117 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.060866 kubelet[1769]: E0208 23:22:07.060155 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.060866 kubelet[1769]: E0208 23:22:07.060277 1769 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.058000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:07.063798 kernel: audit: type=1400 audit(1707434527.058:181): avc: denied { mac_admin } for pid=1769 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:07.063924 kernel: audit: type=1401 audit(1707434527.058:181): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:07.063950 kernel: audit: type=1300 audit(1707434527.058:181): arch=c000003e syscall=188 success=no exit=-22 a0=c000bfc0f0 a1=c00005d038 a2=c000bfc0c0 a3=25 items=0 ppid=1 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.058000 audit[1769]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bfc0f0 a1=c00005d038 a2=c000bfc0c0 a3=25 items=0 ppid=1 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 8 23:22:07.065175 kernel: audit: type=1327 audit(1707434527.058:181): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:07.058000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:07.074045 kernel: audit: type=1400 audit(1707434527.058:182): avc: denied { mac_admin } for pid=1769 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:07.074104 kernel: audit: type=1401 audit(1707434527.058:182): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:07.074129 kernel: audit: type=1300 audit(1707434527.058:182): arch=c000003e syscall=188 success=no exit=-22 a0=c000bf4460 a1=c00005d050 a2=c000bfc180 a3=25 items=0 ppid=1 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.058000 audit[1769]: AVC avc: denied { mac_admin } for pid=1769 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:07.058000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:07.058000 audit[1769]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bf4460 a1=c00005d050 a2=c000bfc180 a3=25 items=0 ppid=1 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.058000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:07.061000 audit[1781]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.061000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffba4f8030 a2=0 a3=7fffba4f801c items=0 ppid=1769 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:22:07.062000 audit[1782]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.062000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7765a7d0 a2=0 a3=7ffd7765a7bc items=0 ppid=1769 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:22:07.063000 audit[1784]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.063000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=312 a0=3 a1=7fff0ded0590 a2=0 a3=7fff0ded057c items=0 ppid=1769 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:22:07.065000 audit[1786]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.065000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd017ca970 a2=0 a3=7ffd017ca95c items=0 ppid=1769 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:22:07.075000 audit[1790]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.075000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff3642ce50 a2=0 a3=7fff3642ce3c items=0 ppid=1769 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 8 23:22:07.077000 audit[1792]: NETFILTER_CFG 
table=nat:31 family=2 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.077000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd70019880 a2=0 a3=7ffd7001986c items=0 ppid=1769 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:22:07.081000 audit[1796]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.081000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd3e1931f0 a2=0 a3=7ffd3e1931dc items=0 ppid=1769 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:22:07.084000 audit[1799]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.084000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd660eeaf0 a2=0 a3=7ffd660eeadc items=0 ppid=1769 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.084000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:22:07.085000 audit[1800]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.085000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe944e51c0 a2=0 a3=7ffe944e51ac items=0 ppid=1769 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.085000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:22:07.086000 audit[1801]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.086000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb023c950 a2=0 a3=7fffb023c93c items=0 ppid=1769 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:22:07.088000 audit[1803]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.088000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd69b35730 a2=0 a3=7ffd69b3571c items=0 ppid=1769 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:22:07.090000 audit[1805]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.090000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe27c9ada0 a2=0 a3=7ffe27c9ad8c items=0 ppid=1769 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:22:07.091000 audit[1807]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.091000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffea25916a0 a2=0 a3=7ffea259168c items=0 ppid=1769 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:22:07.093000 audit[1811]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1811 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.093000 audit[1811]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffee80c2a60 a2=0 a3=7ffee80c2a4c items=0 ppid=1769 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:22:07.095752 kubelet[1769]: I0208 23:22:07.095722 1769 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:22:07.095752 kubelet[1769]: I0208 23:22:07.095746 1769 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:22:07.095752 kubelet[1769]: I0208 23:22:07.095795 1769 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:22:07.095000 audit[1813]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.095000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffdc4a39910 a2=0 a3=7ffdc4a398fc items=0 ppid=1769 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:22:07.096923 kubelet[1769]: I0208 23:22:07.096900 1769 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:22:07.096000 audit[1815]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.096000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2bf02930 a2=0 a3=7ffc2bf0291c items=0 ppid=1769 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:22:07.097000 audit[1814]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=1814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.097000 audit[1814]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7e9f7d20 a2=0 a3=7fff7e9f7d0c items=0 ppid=1769 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:22:07.098403 kubelet[1769]: I0208 23:22:07.098384 1769 policy_none.go:49] "None policy: Start" Feb 8 23:22:07.097000 audit[1816]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.097000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefc27e0f0 a2=0 a3=7ffefc27e0dc items=0 ppid=1769 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 
8 23:22:07.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:22:07.098000 audit[1817]: NETFILTER_CFG table=nat:44 family=10 entries=2 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.098000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe98116850 a2=0 a3=7ffe9811683c items=0 ppid=1769 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:22:07.099282 kubelet[1769]: I0208 23:22:07.099180 1769 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:22:07.099282 kubelet[1769]: I0208 23:22:07.099195 1769 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:22:07.098000 audit[1818]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:07.098000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2c449ed0 a2=0 a3=7ffc2c449ebc items=0 ppid=1769 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:22:07.100000 audit[1820]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.100000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 
a0=3 a1=7ffe64c71830 a2=0 a3=7ffe64c7181c items=0 ppid=1769 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:22:07.102000 audit[1821]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.102000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff6adc3890 a2=0 a3=7fff6adc387c items=0 ppid=1769 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:22:07.104979 kubelet[1769]: I0208 23:22:07.104954 1769 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:22:07.104000 audit[1769]: AVC avc: denied { mac_admin } for pid=1769 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:07.104000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:07.104000 audit[1769]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f25f80 a1=c000f28798 a2=c000f25f50 a3=25 items=0 ppid=1 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 8 23:22:07.104000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:07.105159 kubelet[1769]: I0208 23:22:07.105021 1769 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:22:07.105159 kubelet[1769]: I0208 23:22:07.105148 1769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:22:07.104000 audit[1823]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.104000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffaf6a86a0 a2=0 a3=7fffaf6a868c items=0 ppid=1769 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:22:07.105000 audit[1824]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.105000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd699bdf40 a2=0 a3=7ffd699bdf2c items=0 ppid=1769 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:22:07.106856 kubelet[1769]: E0208 23:22:07.106829 1769 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 8 23:22:07.106000 audit[1825]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.106000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe60e5d740 a2=0 a3=7ffe60e5d72c items=0 ppid=1769 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:22:07.108000 audit[1827]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.108000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffda824eb20 a2=0 a3=7ffda824eb0c items=0 ppid=1769 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:22:07.109000 audit[1829]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 
23:22:07.109000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd8519c9b0 a2=0 a3=7ffd8519c99c items=0 ppid=1769 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:22:07.111000 audit[1831]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.111000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc24769a00 a2=0 a3=7ffc247699ec items=0 ppid=1769 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.111000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:22:07.112000 audit[1833]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.112000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff67224520 a2=0 a3=7fff6722450c items=0 ppid=1769 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.112000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:22:07.114000 audit[1835]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.114000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffd93fb2ca0 a2=0 a3=7ffd93fb2c8c items=0 ppid=1769 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:22:07.115904 kubelet[1769]: I0208 23:22:07.115877 1769 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:22:07.115939 kubelet[1769]: I0208 23:22:07.115905 1769 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:22:07.115939 kubelet[1769]: I0208 23:22:07.115927 1769 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:22:07.116001 kubelet[1769]: E0208 23:22:07.115990 1769 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:22:07.116314 kubelet[1769]: W0208 23:22:07.116283 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.116358 kubelet[1769]: E0208 23:22:07.116321 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.115000 audit[1836]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.115000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9f95e490 a2=0 a3=7fff9f95e47c items=0 ppid=1769 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:22:07.116000 audit[1837]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.116000 audit[1837]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffe4cf29330 a2=0 a3=7ffe4cf2931c items=0 ppid=1769 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:22:07.117000 audit[1838]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:07.117000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe456a3500 a2=0 a3=7ffe456a34ec items=0 ppid=1769 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:07.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:22:07.161680 kubelet[1769]: I0208 23:22:07.161653 1769 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:22:07.161901 kubelet[1769]: E0208 23:22:07.161884 1769 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Feb 8 23:22:07.217075 kubelet[1769]: I0208 23:22:07.217047 1769 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:07.217869 kubelet[1769]: I0208 23:22:07.217849 1769 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:07.218416 kubelet[1769]: I0208 23:22:07.218389 1769 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:07.219957 kubelet[1769]: I0208 23:22:07.219932 1769 status_manager.go:698] "Failed to get status for pod" 
podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.76:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.76:6443: connect: connection refused" Feb 8 23:22:07.220084 kubelet[1769]: I0208 23:22:07.220065 1769 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.76:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.76:6443: connect: connection refused" Feb 8 23:22:07.220183 kubelet[1769]: I0208 23:22:07.220167 1769 status_manager.go:698] "Failed to get status for pod" podUID=8fcf08633d3b588cdbac26a51ebec92f pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.76:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.76:6443: connect: connection refused" Feb 8 23:22:07.261060 kubelet[1769]: E0208 23:22:07.261016 1769 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.362056 kubelet[1769]: I0208 23:22:07.361381 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 8 23:22:07.362056 kubelet[1769]: I0208 23:22:07.361421 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fcf08633d3b588cdbac26a51ebec92f\") " 
pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:07.362056 kubelet[1769]: I0208 23:22:07.361443 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:07.362056 kubelet[1769]: I0208 23:22:07.361556 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:07.362056 kubelet[1769]: I0208 23:22:07.361645 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:07.362396 kubelet[1769]: I0208 23:22:07.361704 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fcf08633d3b588cdbac26a51ebec92f\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:07.362396 kubelet[1769]: I0208 23:22:07.361736 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"8fcf08633d3b588cdbac26a51ebec92f\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:07.362396 kubelet[1769]: I0208 23:22:07.361861 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:07.362396 kubelet[1769]: I0208 23:22:07.361944 1769 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:07.363336 kubelet[1769]: I0208 23:22:07.363215 1769 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:22:07.363580 kubelet[1769]: E0208 23:22:07.363559 1769 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Feb 8 23:22:07.521351 kubelet[1769]: E0208 23:22:07.521309 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:07.522016 env[1196]: time="2024-02-08T23:22:07.521956269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 8 23:22:07.524146 kubelet[1769]: E0208 23:22:07.524124 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 
23:22:07.524222 kubelet[1769]: E0208 23:22:07.524180 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:07.524551 env[1196]: time="2024-02-08T23:22:07.524526881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 8 23:22:07.524614 env[1196]: time="2024-02-08T23:22:07.524578081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fcf08633d3b588cdbac26a51ebec92f,Namespace:kube-system,Attempt:0,}" Feb 8 23:22:07.661894 kubelet[1769]: E0208 23:22:07.661854 1769 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:07.750633 kubelet[1769]: E0208 23:22:07.750508 1769 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206a7ee808e05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 22, 7, 57546757, time.Local), LastTimestamp:time.Date(2024, 
time.February, 8, 23, 22, 7, 57546757, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.76:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.76:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:22:07.764572 kubelet[1769]: I0208 23:22:07.764547 1769 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:22:07.764900 kubelet[1769]: E0208 23:22:07.764865 1769 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Feb 8 23:22:08.065125 kubelet[1769]: W0208 23:22:08.064978 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.065125 kubelet[1769]: E0208 23:22:08.065056 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.083446 kubelet[1769]: W0208 23:22:08.083390 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.083446 kubelet[1769]: E0208 23:22:08.083444 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.132888 kubelet[1769]: W0208 23:22:08.132855 1769 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.132983 kubelet[1769]: E0208 23:22:08.132898 1769 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Feb 8 23:22:08.148960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805294380.mount: Deactivated successfully. Feb 8 23:22:08.155604 env[1196]: time="2024-02-08T23:22:08.155549274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.156456 env[1196]: time="2024-02-08T23:22:08.156405996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.157302 env[1196]: time="2024-02-08T23:22:08.157273395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.160319 env[1196]: time="2024-02-08T23:22:08.160270481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.161287 env[1196]: time="2024-02-08T23:22:08.161261282Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.162972 env[1196]: time="2024-02-08T23:22:08.162945376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.163981 env[1196]: time="2024-02-08T23:22:08.163960417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.165085 env[1196]: time="2024-02-08T23:22:08.165058161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.165736 env[1196]: time="2024-02-08T23:22:08.165707816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.167650 env[1196]: time="2024-02-08T23:22:08.167630194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.169402 env[1196]: time="2024-02-08T23:22:08.169378976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.169999 env[1196]: time="2024-02-08T23:22:08.169978418Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:08.194718 env[1196]: time="2024-02-08T23:22:08.194288043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:08.194718 env[1196]: time="2024-02-08T23:22:08.194410865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:08.194718 env[1196]: time="2024-02-08T23:22:08.194422864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:08.194718 env[1196]: time="2024-02-08T23:22:08.194553425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67c726bfa61cbe4b3c56fdd72ffb5bd9b3df14e056f6c64cdfa798540eef439a pid=1846 runtime=io.containerd.runc.v2 Feb 8 23:22:08.198726 env[1196]: time="2024-02-08T23:22:08.198492022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:08.198726 env[1196]: time="2024-02-08T23:22:08.198537363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:08.198726 env[1196]: time="2024-02-08T23:22:08.198547027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:08.198726 env[1196]: time="2024-02-08T23:22:08.198678148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd5db541743bda3df29ecb62689220d0913ed730b5421f9aaaccf2f047112f7d pid=1861 runtime=io.containerd.runc.v2 Feb 8 23:22:08.210492 env[1196]: time="2024-02-08T23:22:08.210419817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:08.210664 env[1196]: time="2024-02-08T23:22:08.210494620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:08.210664 env[1196]: time="2024-02-08T23:22:08.210526278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:08.210834 env[1196]: time="2024-02-08T23:22:08.210801523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74af57b248d09498b7d4354f04d5c6c07938ee893c7751e0b655228422c9c756 pid=1893 runtime=io.containerd.runc.v2 Feb 8 23:22:08.249247 env[1196]: time="2024-02-08T23:22:08.249197402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"67c726bfa61cbe4b3c56fdd72ffb5bd9b3df14e056f6c64cdfa798540eef439a\"" Feb 8 23:22:08.251936 env[1196]: time="2024-02-08T23:22:08.251914350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"74af57b248d09498b7d4354f04d5c6c07938ee893c7751e0b655228422c9c756\"" Feb 8 23:22:08.254736 env[1196]: time="2024-02-08T23:22:08.254715075Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fcf08633d3b588cdbac26a51ebec92f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd5db541743bda3df29ecb62689220d0913ed730b5421f9aaaccf2f047112f7d\"" Feb 8 23:22:08.257467 kubelet[1769]: E0208 23:22:08.257266 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:08.257467 kubelet[1769]: E0208 23:22:08.257344 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:08.257467 kubelet[1769]: E0208 23:22:08.257401 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:08.260258 env[1196]: time="2024-02-08T23:22:08.260233198Z" level=info msg="CreateContainer within sandbox \"cd5db541743bda3df29ecb62689220d0913ed730b5421f9aaaccf2f047112f7d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:22:08.260402 env[1196]: time="2024-02-08T23:22:08.260380770Z" level=info msg="CreateContainer within sandbox \"67c726bfa61cbe4b3c56fdd72ffb5bd9b3df14e056f6c64cdfa798540eef439a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:22:08.260511 env[1196]: time="2024-02-08T23:22:08.260493357Z" level=info msg="CreateContainer within sandbox \"74af57b248d09498b7d4354f04d5c6c07938ee893c7751e0b655228422c9c756\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:22:08.280779 env[1196]: time="2024-02-08T23:22:08.280723158Z" level=info msg="CreateContainer within sandbox \"cd5db541743bda3df29ecb62689220d0913ed730b5421f9aaaccf2f047112f7d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"072a6df513d8a870cfff13c712daa817040254b9eef463b93f1079d5ec22441a\"" Feb 8 23:22:08.281216 env[1196]: time="2024-02-08T23:22:08.281186766Z" level=info msg="StartContainer for \"072a6df513d8a870cfff13c712daa817040254b9eef463b93f1079d5ec22441a\"" Feb 8 23:22:08.291626 env[1196]: time="2024-02-08T23:22:08.291594080Z" level=info msg="CreateContainer within sandbox \"67c726bfa61cbe4b3c56fdd72ffb5bd9b3df14e056f6c64cdfa798540eef439a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65b026a957a091e47e258a0f226de0c5ef107811a56828cb587bd12e618fff98\"" Feb 8 23:22:08.292623 env[1196]: time="2024-02-08T23:22:08.292584681Z" level=info msg="StartContainer for \"65b026a957a091e47e258a0f226de0c5ef107811a56828cb587bd12e618fff98\"" Feb 8 23:22:08.294919 env[1196]: time="2024-02-08T23:22:08.294895362Z" level=info msg="CreateContainer within sandbox \"74af57b248d09498b7d4354f04d5c6c07938ee893c7751e0b655228422c9c756\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17a9197c88716e7ce091bf0d7c1c3c91f3bdc575f6bea026493c35b5a224217e\"" Feb 8 23:22:08.295137 env[1196]: time="2024-02-08T23:22:08.295120946Z" level=info msg="StartContainer for \"17a9197c88716e7ce091bf0d7c1c3c91f3bdc575f6bea026493c35b5a224217e\"" Feb 8 23:22:08.342985 env[1196]: time="2024-02-08T23:22:08.342877478Z" level=info msg="StartContainer for \"072a6df513d8a870cfff13c712daa817040254b9eef463b93f1079d5ec22441a\" returns successfully" Feb 8 23:22:08.354924 env[1196]: time="2024-02-08T23:22:08.354883485Z" level=info msg="StartContainer for \"17a9197c88716e7ce091bf0d7c1c3c91f3bdc575f6bea026493c35b5a224217e\" returns successfully" Feb 8 23:22:08.356369 env[1196]: time="2024-02-08T23:22:08.356340161Z" level=info msg="StartContainer for \"65b026a957a091e47e258a0f226de0c5ef107811a56828cb587bd12e618fff98\" returns successfully" Feb 8 23:22:08.566447 kubelet[1769]: I0208 23:22:08.566410 1769 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Feb 8 23:22:09.122518 kubelet[1769]: E0208 23:22:09.122485 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:09.124615 kubelet[1769]: E0208 23:22:09.124591 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:09.125521 kubelet[1769]: E0208 23:22:09.125486 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:09.643979 kubelet[1769]: E0208 23:22:09.643952 1769 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 8 23:22:09.700017 kubelet[1769]: I0208 23:22:09.699977 1769 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:22:10.053968 kubelet[1769]: I0208 23:22:10.053837 1769 apiserver.go:52] "Watching apiserver" Feb 8 23:22:10.260619 kubelet[1769]: I0208 23:22:10.260580 1769 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:22:10.279758 kubelet[1769]: I0208 23:22:10.279727 1769 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:22:10.659729 kubelet[1769]: E0208 23:22:10.659700 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:10.858488 kubelet[1769]: E0208 23:22:10.858456 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:11.057234 kubelet[1769]: E0208 23:22:11.057136 1769 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:11.127510 kubelet[1769]: E0208 23:22:11.127481 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:11.127736 kubelet[1769]: E0208 23:22:11.127716 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:11.128081 kubelet[1769]: E0208 23:22:11.128055 1769 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:12.215012 systemd[1]: Reloading. Feb 8 23:22:12.277703 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2024-02-08T23:22:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:22:12.278045 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2024-02-08T23:22:12Z" level=info msg="torcx already run" Feb 8 23:22:12.355135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:22:12.355154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 8 23:22:12.380468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:22:12.463986 systemd[1]: Stopping kubelet.service... Feb 8 23:22:12.464148 kubelet[1769]: I0208 23:22:12.464016 1769 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:22:12.481089 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:22:12.481369 systemd[1]: Stopped kubelet.service. Feb 8 23:22:12.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:12.482120 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 8 23:22:12.482168 kernel: audit: type=1131 audit(1707434532.480:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:12.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:12.483062 systemd[1]: Started kubelet.service. Feb 8 23:22:12.486459 kernel: audit: type=1130 audit(1707434532.483:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:12.538471 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 8 23:22:12.538471 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:22:12.538882 kubelet[2143]: I0208 23:22:12.538499 2143 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:22:12.539694 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:22:12.539694 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:22:12.542330 kubelet[2143]: I0208 23:22:12.542314 2143 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:22:12.542330 kubelet[2143]: I0208 23:22:12.542328 2143 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:22:12.542467 kubelet[2143]: I0208 23:22:12.542457 2143 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:22:12.543479 kubelet[2143]: I0208 23:22:12.543462 2143 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:22:12.544041 kubelet[2143]: I0208 23:22:12.544018 2143 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:22:12.547566 kubelet[2143]: I0208 23:22:12.547548 2143 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:22:12.547906 kubelet[2143]: I0208 23:22:12.547893 2143 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:22:12.547956 kubelet[2143]: I0208 23:22:12.547948 2143 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:22:12.548046 kubelet[2143]: I0208 23:22:12.547964 2143 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:22:12.548046 kubelet[2143]: I0208 23:22:12.547973 2143 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:22:12.548046 kubelet[2143]: I0208 23:22:12.547997 2143 state_mem.go:36] "Initialized new 
in-memory state store" Feb 8 23:22:12.550533 kubelet[2143]: I0208 23:22:12.550497 2143 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:22:12.550533 kubelet[2143]: I0208 23:22:12.550527 2143 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:22:12.550661 kubelet[2143]: I0208 23:22:12.550545 2143 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:22:12.550661 kubelet[2143]: I0208 23:22:12.550557 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:22:12.552391 kubelet[2143]: I0208 23:22:12.552377 2143 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:22:12.552871 kubelet[2143]: I0208 23:22:12.552848 2143 server.go:1186] "Started kubelet" Feb 8 23:22:12.557000 audit[2143]: AVC avc: denied { mac_admin } for pid=2143 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:12.558847 kubelet[2143]: I0208 23:22:12.555179 2143 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:22:12.558847 kubelet[2143]: I0208 23:22:12.558125 2143 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:22:12.558847 kubelet[2143]: I0208 23:22:12.558152 2143 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:22:12.558847 kubelet[2143]: I0208 23:22:12.558168 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:22:12.557000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:12.563799 kernel: audit: 
type=1400 audit(1707434532.557:219): avc: denied { mac_admin } for pid=2143 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:12.563847 kernel: audit: type=1401 audit(1707434532.557:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:12.563872 kernel: audit: type=1300 audit(1707434532.557:219): arch=c000003e syscall=188 success=no exit=-22 a0=c000cf0570 a1=c0002c3470 a2=c000cf0540 a3=25 items=0 ppid=1 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:12.557000 audit[2143]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cf0570 a1=c0002c3470 a2=c000cf0540 a3=25 items=0 ppid=1 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:12.563949 kubelet[2143]: E0208 23:22:12.563755 2143 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:22:12.563949 kubelet[2143]: E0208 23:22:12.563795 2143 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:22:12.557000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:12.567936 kernel: audit: type=1327 audit(1707434532.557:219): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:12.567969 kernel: audit: type=1400 audit(1707434532.557:220): avc: denied { mac_admin } for pid=2143 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:12.557000 audit[2143]: AVC avc: denied { mac_admin } for pid=2143 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:12.569875 kubelet[2143]: I0208 23:22:12.569863 2143 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:22:12.570106 kubelet[2143]: I0208 23:22:12.570086 2143 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:22:12.570680 kubelet[2143]: I0208 23:22:12.570669 2143 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:22:12.557000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:12.572796 kernel: audit: type=1401 audit(1707434532.557:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:12.557000 audit[2143]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a012e0 a1=c0002c3488 a2=c000cf0600 a3=25 items=0 ppid=1 
pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:12.579127 kernel: audit: type=1300 audit(1707434532.557:220): arch=c000003e syscall=188 success=no exit=-22 a0=c000a012e0 a1=c0002c3488 a2=c000cf0600 a3=25 items=0 ppid=1 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:12.579259 kernel: audit: type=1327 audit(1707434532.557:220): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:12.557000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:12.605974 kubelet[2143]: I0208 23:22:12.605942 2143 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:22:12.620199 kubelet[2143]: I0208 23:22:12.620177 2143 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:22:12.620355 kubelet[2143]: I0208 23:22:12.620343 2143 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:22:12.620571 kubelet[2143]: I0208 23:22:12.620560 2143 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:22:12.623170 kubelet[2143]: E0208 23:22:12.623157 2143 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:22:12.644815 kubelet[2143]: I0208 23:22:12.644791 2143 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:22:12.644999 kubelet[2143]: I0208 23:22:12.644986 2143 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:22:12.645075 kubelet[2143]: I0208 23:22:12.645062 2143 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:22:12.645276 kubelet[2143]: I0208 23:22:12.645265 2143 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:22:12.645351 kubelet[2143]: I0208 23:22:12.645338 2143 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:22:12.645467 kubelet[2143]: I0208 23:22:12.645453 2143 policy_none.go:49] "None policy: Start" Feb 8 23:22:12.645863 kubelet[2143]: I0208 23:22:12.645853 2143 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:22:12.645934 kubelet[2143]: I0208 23:22:12.645921 2143 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:22:12.646095 kubelet[2143]: I0208 23:22:12.646084 2143 state_mem.go:75] "Updated machine memory state" Feb 8 23:22:12.647265 kubelet[2143]: I0208 23:22:12.647251 2143 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:22:12.646000 audit[2143]: AVC avc: denied { mac_admin } for pid=2143 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:22:12.646000 audit: 
SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:22:12.646000 audit[2143]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009e12c0 a1=c000a06d80 a2=c0009e1290 a3=25 items=0 ppid=1 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:12.646000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:22:12.647660 kubelet[2143]: I0208 23:22:12.647643 2143 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:22:12.647908 kubelet[2143]: I0208 23:22:12.647897 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:22:12.675140 kubelet[2143]: I0208 23:22:12.675121 2143 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:22:12.681142 kubelet[2143]: I0208 23:22:12.681117 2143 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 8 23:22:12.681344 kubelet[2143]: I0208 23:22:12.681332 2143 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:22:12.723348 kubelet[2143]: I0208 23:22:12.723302 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:12.723490 kubelet[2143]: I0208 23:22:12.723390 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:12.723490 kubelet[2143]: I0208 23:22:12.723415 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:12.730669 kubelet[2143]: E0208 23:22:12.730634 2143 kubelet.go:1802] "Failed creating a mirror 
pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.756088 kubelet[2143]: E0208 23:22:12.755973 2143 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 8 23:22:12.871347 kubelet[2143]: I0208 23:22:12.871288 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 8 23:22:12.871347 kubelet[2143]: I0208 23:22:12.871347 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fcf08633d3b588cdbac26a51ebec92f\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:12.871347 kubelet[2143]: I0208 23:22:12.871368 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8fcf08633d3b588cdbac26a51ebec92f\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:12.871550 kubelet[2143]: I0208 23:22:12.871485 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.871643 kubelet[2143]: I0208 23:22:12.871547 2143 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fcf08633d3b588cdbac26a51ebec92f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fcf08633d3b588cdbac26a51ebec92f\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:12.871643 kubelet[2143]: I0208 23:22:12.871580 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.871643 kubelet[2143]: I0208 23:22:12.871606 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.871710 kubelet[2143]: I0208 23:22:12.871648 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.871710 kubelet[2143]: I0208 23:22:12.871679 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:12.956453 kubelet[2143]: E0208 
23:22:12.956405 2143 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:13.032129 kubelet[2143]: E0208 23:22:13.032011 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:13.057040 kubelet[2143]: E0208 23:22:13.057021 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:13.269789 kubelet[2143]: E0208 23:22:13.257307 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:13.552406 kubelet[2143]: I0208 23:22:13.552369 2143 apiserver.go:52] "Watching apiserver" Feb 8 23:22:13.771198 kubelet[2143]: I0208 23:22:13.771148 2143 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:22:13.777438 kubelet[2143]: I0208 23:22:13.777400 2143 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:22:14.159611 kubelet[2143]: E0208 23:22:14.159556 2143 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 8 23:22:14.160167 kubelet[2143]: E0208 23:22:14.160137 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.355746 kubelet[2143]: E0208 23:22:14.355710 2143 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 8 23:22:14.356122 kubelet[2143]: E0208 23:22:14.356105 
2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.554881 kubelet[2143]: E0208 23:22:14.554778 2143 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 8 23:22:14.555578 kubelet[2143]: E0208 23:22:14.555568 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.642145 kubelet[2143]: E0208 23:22:14.642120 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.643223 kubelet[2143]: E0208 23:22:14.642733 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.643389 kubelet[2143]: E0208 23:22:14.643363 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:14.766758 kubelet[2143]: I0208 23:22:14.766724 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.766678557 pod.CreationTimestamp="2024-02-08 23:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:14.766195124 +0000 UTC m=+2.279614378" watchObservedRunningTime="2024-02-08 23:22:14.766678557 +0000 UTC m=+2.280097800" Feb 8 23:22:15.441127 sudo[1354]: pam_unix(sudo:session): session closed for user root Feb 8 23:22:15.440000 audit[1354]: USER_END pid=1354 
uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:22:15.440000 audit[1354]: CRED_DISP pid=1354 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:22:15.442859 sshd[1349]: pam_unix(sshd:session): session closed for user core Feb 8 23:22:15.442000 audit[1349]: USER_END pid=1349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:15.443000 audit[1349]: CRED_DISP pid=1349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:15.445089 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:35374.service: Deactivated successfully. Feb 8 23:22:15.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.76:22-10.0.0.1:35374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:15.447241 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:22:15.447263 systemd-logind[1177]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:22:15.448211 systemd-logind[1177]: Removed session 7. 
Feb 8 23:22:15.555899 kubelet[2143]: I0208 23:22:15.555861 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.5557999989999995 pod.CreationTimestamp="2024-02-08 23:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:15.555587144 +0000 UTC m=+3.069006407" watchObservedRunningTime="2024-02-08 23:22:15.555799999 +0000 UTC m=+3.069219252" Feb 8 23:22:15.870122 kubelet[2143]: E0208 23:22:15.870021 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:15.956034 kubelet[2143]: I0208 23:22:15.955986 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.955950346 pod.CreationTimestamp="2024-02-08 23:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:15.955805557 +0000 UTC m=+3.469224800" watchObservedRunningTime="2024-02-08 23:22:15.955950346 +0000 UTC m=+3.469369589" Feb 8 23:22:17.868239 kubelet[2143]: E0208 23:22:17.868189 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:19.340695 kubelet[2143]: E0208 23:22:19.340667 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:19.648475 kubelet[2143]: E0208 23:22:19.648433 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 8 23:22:24.723780 kubelet[2143]: I0208 23:22:24.723729 2143 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:22:24.724236 env[1196]: time="2024-02-08T23:22:24.724150488Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:22:24.724475 kubelet[2143]: I0208 23:22:24.724302 2143 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:22:25.697219 kubelet[2143]: I0208 23:22:25.697183 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:25.756791 kubelet[2143]: I0208 23:22:25.756744 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aead270e-60dc-4c1f-a53e-e984bd5f814a-xtables-lock\") pod \"kube-proxy-wrxhw\" (UID: \"aead270e-60dc-4c1f-a53e-e984bd5f814a\") " pod="kube-system/kube-proxy-wrxhw" Feb 8 23:22:25.756791 kubelet[2143]: I0208 23:22:25.756788 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvxj\" (UniqueName: \"kubernetes.io/projected/aead270e-60dc-4c1f-a53e-e984bd5f814a-kube-api-access-djvxj\") pod \"kube-proxy-wrxhw\" (UID: \"aead270e-60dc-4c1f-a53e-e984bd5f814a\") " pod="kube-system/kube-proxy-wrxhw" Feb 8 23:22:25.757317 kubelet[2143]: I0208 23:22:25.756808 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aead270e-60dc-4c1f-a53e-e984bd5f814a-kube-proxy\") pod \"kube-proxy-wrxhw\" (UID: \"aead270e-60dc-4c1f-a53e-e984bd5f814a\") " pod="kube-system/kube-proxy-wrxhw" Feb 8 23:22:25.757317 kubelet[2143]: I0208 23:22:25.756826 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/aead270e-60dc-4c1f-a53e-e984bd5f814a-lib-modules\") pod \"kube-proxy-wrxhw\" (UID: \"aead270e-60dc-4c1f-a53e-e984bd5f814a\") " pod="kube-system/kube-proxy-wrxhw" Feb 8 23:22:25.875745 kubelet[2143]: E0208 23:22:25.875709 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:25.989673 kubelet[2143]: I0208 23:22:25.989553 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:26.058037 kubelet[2143]: I0208 23:22:26.058000 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bjng\" (UniqueName: \"kubernetes.io/projected/0546721e-0117-49f5-b0e5-1331fbdb34e0-kube-api-access-6bjng\") pod \"tigera-operator-cfc98749c-44vgs\" (UID: \"0546721e-0117-49f5-b0e5-1331fbdb34e0\") " pod="tigera-operator/tigera-operator-cfc98749c-44vgs" Feb 8 23:22:26.058037 kubelet[2143]: I0208 23:22:26.058039 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0546721e-0117-49f5-b0e5-1331fbdb34e0-var-lib-calico\") pod \"tigera-operator-cfc98749c-44vgs\" (UID: \"0546721e-0117-49f5-b0e5-1331fbdb34e0\") " pod="tigera-operator/tigera-operator-cfc98749c-44vgs" Feb 8 23:22:26.301037 kubelet[2143]: E0208 23:22:26.300895 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:26.301481 env[1196]: time="2024-02-08T23:22:26.301443991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrxhw,Uid:aead270e-60dc-4c1f-a53e-e984bd5f814a,Namespace:kube-system,Attempt:0,}" Feb 8 23:22:26.359285 env[1196]: time="2024-02-08T23:22:26.359215674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:26.359285 env[1196]: time="2024-02-08T23:22:26.359258450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:26.359285 env[1196]: time="2024-02-08T23:22:26.359271476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:26.359510 env[1196]: time="2024-02-08T23:22:26.359433563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/957c1ce511b1e92c589eedcf83b48ccce931d1024effa00cff0d91e8484840e3 pid=2261 runtime=io.containerd.runc.v2 Feb 8 23:22:26.391440 env[1196]: time="2024-02-08T23:22:26.391384371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrxhw,Uid:aead270e-60dc-4c1f-a53e-e984bd5f814a,Namespace:kube-system,Attempt:0,} returns sandbox id \"957c1ce511b1e92c589eedcf83b48ccce931d1024effa00cff0d91e8484840e3\"" Feb 8 23:22:26.392206 kubelet[2143]: E0208 23:22:26.392185 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:26.394114 env[1196]: time="2024-02-08T23:22:26.394076581Z" level=info msg="CreateContainer within sandbox \"957c1ce511b1e92c589eedcf83b48ccce931d1024effa00cff0d91e8484840e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:22:26.409101 env[1196]: time="2024-02-08T23:22:26.409044443Z" level=info msg="CreateContainer within sandbox \"957c1ce511b1e92c589eedcf83b48ccce931d1024effa00cff0d91e8484840e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92ec93d2c54cb2f4491d182bb87c404499bad1b8aa58c2b3d414addc9cd27ef1\"" Feb 8 23:22:26.409664 env[1196]: time="2024-02-08T23:22:26.409621918Z" level=info msg="StartContainer for 
\"92ec93d2c54cb2f4491d182bb87c404499bad1b8aa58c2b3d414addc9cd27ef1\"" Feb 8 23:22:26.470030 env[1196]: time="2024-02-08T23:22:26.469967442Z" level=info msg="StartContainer for \"92ec93d2c54cb2f4491d182bb87c404499bad1b8aa58c2b3d414addc9cd27ef1\" returns successfully" Feb 8 23:22:26.499804 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 8 23:22:26.499924 kernel: audit: type=1325 audit(1707434546.496:227): table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.499944 kernel: audit: type=1300 audit(1707434546.496:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc666ed560 a2=0 a3=7ffc666ed54c items=0 ppid=2312 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.496000 audit[2352]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.496000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc666ed560 a2=0 a3=7ffc666ed54c items=0 ppid=2312 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:22:26.503736 kernel: audit: type=1327 audit(1707434546.496:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:22:26.503817 kernel: audit: type=1325 audit(1707434546.496:228): table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 
23:22:26.496000 audit[2353]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.496000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfce3d020 a2=0 a3=7ffcfce3d00c items=0 ppid=2312 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.508676 kernel: audit: type=1300 audit(1707434546.496:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfce3d020 a2=0 a3=7ffcfce3d00c items=0 ppid=2312 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:22:26.496000 audit[2354]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.511940 kernel: audit: type=1327 audit(1707434546.496:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:22:26.512043 kernel: audit: type=1325 audit(1707434546.496:229): table=nat:61 family=10 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.512072 kernel: audit: type=1300 audit(1707434546.496:229): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab3c1b60 a2=0 a3=7ffeab3c1b4c items=0 ppid=2312 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:22:26.496000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab3c1b60 a2=0 a3=7ffeab3c1b4c items=0 ppid=2312 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:22:26.537056 kernel: audit: type=1327 audit(1707434546.496:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:22:26.537209 kernel: audit: type=1325 audit(1707434546.496:230): table=nat:62 family=2 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.496000 audit[2356]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.496000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3f48ef80 a2=0 a3=7ffd3f48ef6c items=0 ppid=2312 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:22:26.498000 audit[2357]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.498000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef85c77d0 a2=0 a3=7ffef85c77bc items=0 ppid=2312 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:22:26.498000 audit[2358]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.498000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffacb15fe0 a2=0 a3=7fffacb15fcc items=0 ppid=2312 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:22:26.595941 env[1196]: time="2024-02-08T23:22:26.595821934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-44vgs,Uid:0546721e-0117-49f5-b0e5-1331fbdb34e0,Namespace:tigera-operator,Attempt:0,}" Feb 8 23:22:26.602000 audit[2359]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.602000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdb9fc6cf0 a2=0 a3=7ffdb9fc6cdc items=0 ppid=2312 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:22:26.604000 audit[2361]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 8 23:22:26.604000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed99cfef0 a2=0 a3=7ffed99cfedc items=0 ppid=2312 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 8 23:22:26.607000 audit[2364]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.607000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff17a02490 a2=0 a3=7fff17a0247c items=0 ppid=2312 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 8 23:22:26.609000 audit[2365]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.609000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe44bc5760 a2=0 a3=7ffe44bc574c items=0 ppid=2312 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:22:26.611000 audit[2367]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.611000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff60aab000 a2=0 a3=7fff60aaafec items=0 ppid=2312 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:22:26.612000 audit[2368]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.612000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8e7127e0 a2=0 a3=7ffc8e7127cc items=0 ppid=2312 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:22:26.615000 audit[2370]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.615000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf2762a60 a2=0 a3=7ffcf2762a4c items=0 ppid=2312 pid=2370 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:22:26.618000 audit[2373]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.618000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf498bb90 a2=0 a3=7ffcf498bb7c items=0 ppid=2312 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 8 23:22:26.619000 audit[2374]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.619000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2f1cab50 a2=0 a3=7fff2f1cab3c items=0 ppid=2312 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 
23:22:26.621000 audit[2376]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.621000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf0ab0be0 a2=0 a3=7ffdf0ab0bcc items=0 ppid=2312 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:22:26.623000 audit[2377]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.623000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1976d550 a2=0 a3=7ffc1976d53c items=0 ppid=2312 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:22:26.626000 audit[2379]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.626000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccc0b8770 a2=0 a3=7ffccc0b875c items=0 ppid=2312 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.626000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:22:26.629000 audit[2382]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.629000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcaf29f070 a2=0 a3=7ffcaf29f05c items=0 ppid=2312 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:22:26.632000 audit[2385]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.632000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc91afc7f0 a2=0 a3=7ffc91afc7dc items=0 ppid=2312 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:22:26.633000 audit[2386]: NETFILTER_CFG table=nat:79 family=2 
entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.633000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8fef0060 a2=0 a3=7fff8fef004c items=0 ppid=2312 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:22:26.635000 audit[2388]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.635000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff1be26230 a2=0 a3=7fff1be2621c items=0 ppid=2312 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:22:26.638000 audit[2391]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:22:26.638000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0d0be7b0 a2=0 a3=7ffd0d0be79c items=0 ppid=2312 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.638000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:22:26.648000 audit[2396]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:26.648000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fffcdac3e90 a2=0 a3=7fffcdac3e7c items=0 ppid=2312 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:26.649997 env[1196]: time="2024-02-08T23:22:26.648129313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:26.649997 env[1196]: time="2024-02-08T23:22:26.648168563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:26.649997 env[1196]: time="2024-02-08T23:22:26.648185266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:26.649997 env[1196]: time="2024-02-08T23:22:26.648330169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d8b252e7b7c5d4f7adade2dc409e2127ed84c6f1e6f27cf1156fe6e365211a2 pid=2403 runtime=io.containerd.runc.v2 Feb 8 23:22:26.654000 audit[2396]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:26.654000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffcdac3e90 a2=0 a3=7fffcdac3e7c items=0 ppid=2312 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:26.659699 kubelet[2143]: E0208 23:22:26.659671 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:26.660000 audit[2427]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.660000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4fa201b0 a2=0 a3=7ffe4fa2019c items=0 ppid=2312 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:22:26.663000 audit[2429]: 
NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.663000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd16e42540 a2=0 a3=7ffd16e4252c items=0 ppid=2312 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 8 23:22:26.667000 audit[2437]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.667000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe316dedd0 a2=0 a3=7ffe316dedbc items=0 ppid=2312 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 8 23:22:26.670000 audit[2440]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.670000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5682f1c0 a2=0 a3=7ffe5682f1ac items=0 ppid=2312 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:22:26.672000 audit[2442]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.672000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd84e18360 a2=0 a3=7ffd84e1834c items=0 ppid=2312 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:22:26.674000 audit[2443]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.674000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc46da1d60 a2=0 a3=7ffc46da1d4c items=0 ppid=2312 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:22:26.676000 audit[2445]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.676000 audit[2445]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd757f1bb0 a2=0 a3=7ffd757f1b9c items=0 ppid=2312 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 8 23:22:26.679000 audit[2448]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.679000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe86ecf120 a2=0 a3=7ffe86ecf10c items=0 ppid=2312 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:22:26.680000 audit[2449]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.680000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce148efe0 a2=0 a3=7ffce148efcc items=0 ppid=2312 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.680000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:22:26.682000 audit[2451]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.682000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd924dbf0 a2=0 a3=7fffd924dbdc items=0 ppid=2312 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:22:26.683000 audit[2452]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.683000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdab8b7df0 a2=0 a3=7ffdab8b7ddc items=0 ppid=2312 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:22:26.685000 audit[2454]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.685000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeaa5b3e90 a2=0 a3=7ffeaa5b3e7c items=0 ppid=2312 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:22:26.688000 audit[2457]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.688000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda9e51de0 a2=0 a3=7ffda9e51dcc items=0 ppid=2312 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:22:26.692000 audit[2466]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.692000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6fb62eb0 a2=0 a3=7ffd6fb62e9c items=0 ppid=2312 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.692000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 8 23:22:26.692000 audit[2467]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.692000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa2526540 a2=0 a3=7fffa252652c items=0 ppid=2312 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:22:26.694000 audit[2469]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.694000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe73dcd500 a2=0 a3=7ffe73dcd4ec items=0 ppid=2312 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:22:26.697653 env[1196]: time="2024-02-08T23:22:26.697615813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-44vgs,Uid:0546721e-0117-49f5-b0e5-1331fbdb34e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"3d8b252e7b7c5d4f7adade2dc409e2127ed84c6f1e6f27cf1156fe6e365211a2\"" Feb 8 23:22:26.697000 audit[2472]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:22:26.697000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd270d16e0 a2=0 a3=7ffd270d16cc items=0 ppid=2312 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:22:26.699897 env[1196]: time="2024-02-08T23:22:26.699839489Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 8 23:22:26.703000 audit[2476]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:22:26.703000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdabdeba40 a2=0 a3=7ffdabdeba2c items=0 ppid=2312 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.703000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:26.703000 audit[2476]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:22:26.703000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffdabdeba40 a2=0 a3=7ffdabdeba2c items=0 
ppid=2312 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:26.703000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:27.662200 kubelet[2143]: E0208 23:22:27.662159 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:27.874206 kubelet[2143]: E0208 23:22:27.874174 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:27.894551 kubelet[2143]: I0208 23:22:27.894480 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wrxhw" podStartSLOduration=2.894433814 pod.CreationTimestamp="2024-02-08 23:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:26.903491998 +0000 UTC m=+14.416911251" watchObservedRunningTime="2024-02-08 23:22:27.894433814 +0000 UTC m=+15.407853067" Feb 8 23:22:28.169977 update_engine[1180]: I0208 23:22:28.169852 1180 update_attempter.cc:509] Updating boot flags... Feb 8 23:22:28.925595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476039400.mount: Deactivated successfully. 
Feb 8 23:22:30.959940 env[1196]: time="2024-02-08T23:22:30.959880181Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:30.961988 env[1196]: time="2024-02-08T23:22:30.961958790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:30.964368 env[1196]: time="2024-02-08T23:22:30.964331584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:30.965965 env[1196]: time="2024-02-08T23:22:30.965928475Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:30.966522 env[1196]: time="2024-02-08T23:22:30.966490604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 8 23:22:30.968516 env[1196]: time="2024-02-08T23:22:30.967941183Z" level=info msg="CreateContainer within sandbox \"3d8b252e7b7c5d4f7adade2dc409e2127ed84c6f1e6f27cf1156fe6e365211a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 8 23:22:30.979119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195085292.mount: Deactivated successfully. 
Feb 8 23:22:30.980331 env[1196]: time="2024-02-08T23:22:30.980262879Z" level=info msg="CreateContainer within sandbox \"3d8b252e7b7c5d4f7adade2dc409e2127ed84c6f1e6f27cf1156fe6e365211a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a10c4e4f77bc8e3edd683c34707851235eff35bb3533d8aedbfcbc9da0374165\"" Feb 8 23:22:30.980899 env[1196]: time="2024-02-08T23:22:30.980864587Z" level=info msg="StartContainer for \"a10c4e4f77bc8e3edd683c34707851235eff35bb3533d8aedbfcbc9da0374165\"" Feb 8 23:22:31.020609 env[1196]: time="2024-02-08T23:22:31.020568428Z" level=info msg="StartContainer for \"a10c4e4f77bc8e3edd683c34707851235eff35bb3533d8aedbfcbc9da0374165\" returns successfully" Feb 8 23:22:31.680100 kubelet[2143]: I0208 23:22:31.680073 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-44vgs" podStartSLOduration=-9.223372030174734e+09 pod.CreationTimestamp="2024-02-08 23:22:25 +0000 UTC" firstStartedPulling="2024-02-08 23:22:26.69903718 +0000 UTC m=+14.212456434" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:31.679938096 +0000 UTC m=+19.193357349" watchObservedRunningTime="2024-02-08 23:22:31.680041522 +0000 UTC m=+19.193460775" Feb 8 23:22:32.971000 audit[2552]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:32.973236 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 8 23:22:32.973303 kernel: audit: type=1325 audit(1707434552.971:271): table=filter:103 family=2 entries=13 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:32.971000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fffb5be5b70 a2=0 a3=7fffb5be5b5c items=0 ppid=2312 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:32.980006 kernel: audit: type=1300 audit(1707434552.971:271): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fffb5be5b70 a2=0 a3=7fffb5be5b5c items=0 ppid=2312 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:32.980145 kernel: audit: type=1327 audit(1707434552.971:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:32.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:32.972000 audit[2552]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:32.972000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffb5be5b70 a2=0 a3=7fffb5be5b5c items=0 ppid=2312 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:32.995040 kernel: audit: type=1325 audit(1707434552.972:272): table=nat:104 family=2 entries=20 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:32.995090 kernel: audit: type=1300 audit(1707434552.972:272): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffb5be5b70 a2=0 a3=7fffb5be5b5c items=0 ppid=2312 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:32.995112 kernel: audit: 
type=1327 audit(1707434552.972:272): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:32.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:33.022000 audit[2578]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:33.022000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff0d0236d0 a2=0 a3=7fff0d0236bc items=0 ppid=2312 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:33.027808 kernel: audit: type=1325 audit(1707434553.022:273): table=filter:105 family=2 entries=14 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:33.027859 kernel: audit: type=1300 audit(1707434553.022:273): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff0d0236d0 a2=0 a3=7fff0d0236bc items=0 ppid=2312 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:33.027906 kernel: audit: type=1327 audit(1707434553.022:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:33.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:33.022000 audit[2578]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:33.022000 audit[2578]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff0d0236d0 a2=0 a3=7fff0d0236bc items=0 ppid=2312 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:33.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:33.032794 kernel: audit: type=1325 audit(1707434553.022:274): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:33.143498 kubelet[2143]: I0208 23:22:33.143465 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:33.241779 kubelet[2143]: I0208 23:22:33.241627 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:33.312106 kubelet[2143]: I0208 23:22:33.312046 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8j5\" (UniqueName: \"kubernetes.io/projected/879c6375-9937-49ab-b33a-e4f754cac508-kube-api-access-tm8j5\") pod \"calico-typha-7f7bb6b67-9l75w\" (UID: \"879c6375-9937-49ab-b33a-e4f754cac508\") " pod="calico-system/calico-typha-7f7bb6b67-9l75w" Feb 8 23:22:33.312106 kubelet[2143]: I0208 23:22:33.312094 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/879c6375-9937-49ab-b33a-e4f754cac508-typha-certs\") pod \"calico-typha-7f7bb6b67-9l75w\" (UID: \"879c6375-9937-49ab-b33a-e4f754cac508\") " pod="calico-system/calico-typha-7f7bb6b67-9l75w" Feb 8 23:22:33.312106 kubelet[2143]: I0208 23:22:33.312114 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879c6375-9937-49ab-b33a-e4f754cac508-tigera-ca-bundle\") pod 
\"calico-typha-7f7bb6b67-9l75w\" (UID: \"879c6375-9937-49ab-b33a-e4f754cac508\") " pod="calico-system/calico-typha-7f7bb6b67-9l75w" Feb 8 23:22:33.396840 kubelet[2143]: I0208 23:22:33.396790 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:22:33.397355 kubelet[2143]: E0208 23:22:33.397325 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:33.412453 kubelet[2143]: I0208 23:22:33.412413 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fc690ce-af04-4df5-9869-08171a5b32af-tigera-ca-bundle\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.412715 kubelet[2143]: I0208 23:22:33.412691 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-flexvol-driver-host\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413452 kubelet[2143]: I0208 23:22:33.413432 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5fc690ce-af04-4df5-9869-08171a5b32af-node-certs\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413526 kubelet[2143]: I0208 23:22:33.413488 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-var-run-calico\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413691 kubelet[2143]: I0208 23:22:33.413657 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-xtables-lock\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413743 kubelet[2143]: I0208 23:22:33.413705 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-var-lib-calico\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413743 kubelet[2143]: I0208 23:22:33.413729 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-cni-net-dir\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413807 kubelet[2143]: I0208 23:22:33.413755 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5q8j\" (UniqueName: \"kubernetes.io/projected/5fc690ce-af04-4df5-9869-08171a5b32af-kube-api-access-s5q8j\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413807 kubelet[2143]: I0208 23:22:33.413795 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-policysync\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413861 kubelet[2143]: I0208 23:22:33.413853 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-cni-bin-dir\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413887 kubelet[2143]: I0208 23:22:33.413879 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-lib-modules\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.413911 kubelet[2143]: I0208 23:22:33.413898 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5fc690ce-af04-4df5-9869-08171a5b32af-cni-log-dir\") pod \"calico-node-zgc8j\" (UID: \"5fc690ce-af04-4df5-9869-08171a5b32af\") " pod="calico-system/calico-node-zgc8j" Feb 8 23:22:33.514663 kubelet[2143]: I0208 23:22:33.514594 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2f044ac1-9cb8-43bc-bcbe-22f291a59d64-socket-dir\") pod \"csi-node-driver-k779c\" (UID: \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\") " pod="calico-system/csi-node-driver-k779c" Feb 8 23:22:33.514901 kubelet[2143]: I0208 23:22:33.514886 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2f044ac1-9cb8-43bc-bcbe-22f291a59d64-varrun\") pod 
\"csi-node-driver-k779c\" (UID: \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\") " pod="calico-system/csi-node-driver-k779c" Feb 8 23:22:33.515101 kubelet[2143]: E0208 23:22:33.515082 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515101 kubelet[2143]: W0208 23:22:33.515095 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515176 kubelet[2143]: E0208 23:22:33.515111 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515250 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515729 kubelet[2143]: W0208 23:22:33.515257 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515265 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515359 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515729 kubelet[2143]: W0208 23:22:33.515364 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515373 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515489 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515729 kubelet[2143]: W0208 23:22:33.515495 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515505 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.515729 kubelet[2143]: E0208 23:22:33.515643 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515987 kubelet[2143]: W0208 23:22:33.515650 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515987 kubelet[2143]: E0208 23:22:33.515663 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.515987 kubelet[2143]: E0208 23:22:33.515870 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.515987 kubelet[2143]: W0208 23:22:33.515881 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.515987 kubelet[2143]: E0208 23:22:33.515898 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.515987 kubelet[2143]: I0208 23:22:33.515919 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2f044ac1-9cb8-43bc-bcbe-22f291a59d64-registration-dir\") pod \"csi-node-driver-k779c\" (UID: \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\") " pod="calico-system/csi-node-driver-k779c" Feb 8 23:22:33.516112 kubelet[2143]: E0208 23:22:33.516053 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516112 kubelet[2143]: W0208 23:22:33.516061 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516112 kubelet[2143]: E0208 23:22:33.516076 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.516242 kubelet[2143]: E0208 23:22:33.516216 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516242 kubelet[2143]: W0208 23:22:33.516225 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516291 kubelet[2143]: E0208 23:22:33.516246 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.516355 kubelet[2143]: E0208 23:22:33.516338 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516355 kubelet[2143]: W0208 23:22:33.516349 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516422 kubelet[2143]: E0208 23:22:33.516413 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.516534 kubelet[2143]: E0208 23:22:33.516499 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516534 kubelet[2143]: W0208 23:22:33.516511 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516602 kubelet[2143]: E0208 23:22:33.516571 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.516633 kubelet[2143]: E0208 23:22:33.516615 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516633 kubelet[2143]: W0208 23:22:33.516622 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516779 kubelet[2143]: E0208 23:22:33.516750 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516779 kubelet[2143]: W0208 23:22:33.516776 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516870 kubelet[2143]: E0208 23:22:33.516791 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.516979 kubelet[2143]: E0208 23:22:33.516958 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.516979 kubelet[2143]: W0208 23:22:33.516970 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.516979 kubelet[2143]: E0208 23:22:33.516979 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.517094 kubelet[2143]: E0208 23:22:33.517085 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.517094 kubelet[2143]: W0208 23:22:33.517093 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.517140 kubelet[2143]: E0208 23:22:33.517101 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.517205 kubelet[2143]: E0208 23:22:33.517190 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.517205 kubelet[2143]: W0208 23:22:33.517200 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.517205 kubelet[2143]: E0208 23:22:33.517207 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:22:33.517330 kubelet[2143]: E0208 23:22:33.517317 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.517330 kubelet[2143]: W0208 23:22:33.517323 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.517330 kubelet[2143]: E0208 23:22:33.517330 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.517418 kubelet[2143]: E0208 23:22:33.517383 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520306 kubelet[2143]: E0208 23:22:33.519954 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520306 kubelet[2143]: W0208 23:22:33.519963 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520306 kubelet[2143]: E0208 23:22:33.520028 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520306 kubelet[2143]: E0208 23:22:33.520176 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520306 kubelet[2143]: W0208 23:22:33.520197 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520306 kubelet[2143]: E0208 23:22:33.520266 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520465 kubelet[2143]: E0208 23:22:33.520395 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520465 kubelet[2143]: W0208 23:22:33.520401 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520465 kubelet[2143]: E0208 23:22:33.520449 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520553 kubelet[2143]: E0208 23:22:33.520533 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520553 kubelet[2143]: W0208 23:22:33.520543 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520713 kubelet[2143]: E0208 23:22:33.520573 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520713 kubelet[2143]: E0208 23:22:33.520644 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520713 kubelet[2143]: W0208 23:22:33.520650 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520713 kubelet[2143]: E0208 23:22:33.520668 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.520911 kubelet[2143]: E0208 23:22:33.520748 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.520911 kubelet[2143]: W0208 23:22:33.520754 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.520911 kubelet[2143]: E0208 23:22:33.520811 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521001 kubelet[2143]: E0208 23:22:33.520941 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521001 kubelet[2143]: W0208 23:22:33.520948 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521001 kubelet[2143]: E0208 23:22:33.520958 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521087 kubelet[2143]: E0208 23:22:33.521081 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521087 kubelet[2143]: W0208 23:22:33.521087 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521146 kubelet[2143]: E0208 23:22:33.521098 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521230 kubelet[2143]: E0208 23:22:33.521218 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521230 kubelet[2143]: W0208 23:22:33.521225 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521230 kubelet[2143]: E0208 23:22:33.521232 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521360 kubelet[2143]: E0208 23:22:33.521343 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521360 kubelet[2143]: W0208 23:22:33.521353 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521449 kubelet[2143]: E0208 23:22:33.521370 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521488 kubelet[2143]: E0208 23:22:33.521463 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521488 kubelet[2143]: W0208 23:22:33.521471 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521488 kubelet[2143]: E0208 23:22:33.521481 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521609 kubelet[2143]: E0208 23:22:33.521596 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521609 kubelet[2143]: W0208 23:22:33.521604 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521658 kubelet[2143]: E0208 23:22:33.521615 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521734 kubelet[2143]: E0208 23:22:33.521724 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521734 kubelet[2143]: W0208 23:22:33.521732 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521805 kubelet[2143]: E0208 23:22:33.521743 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521868 kubelet[2143]: E0208 23:22:33.521858 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521868 kubelet[2143]: W0208 23:22:33.521866 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.521913 kubelet[2143]: E0208 23:22:33.521877 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.521991 kubelet[2143]: E0208 23:22:33.521981 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.521991 kubelet[2143]: W0208 23:22:33.521990 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522036 kubelet[2143]: E0208 23:22:33.522001 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522036 kubelet[2143]: I0208 23:22:33.522017 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f044ac1-9cb8-43bc-bcbe-22f291a59d64-kubelet-dir\") pod \"csi-node-driver-k779c\" (UID: \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\") " pod="calico-system/csi-node-driver-k779c"
Feb 8 23:22:33.522126 kubelet[2143]: E0208 23:22:33.522115 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522150 kubelet[2143]: W0208 23:22:33.522127 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522150 kubelet[2143]: E0208 23:22:33.522138 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522197 kubelet[2143]: I0208 23:22:33.522153 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbqh\" (UniqueName: \"kubernetes.io/projected/2f044ac1-9cb8-43bc-bcbe-22f291a59d64-kube-api-access-gjbqh\") pod \"csi-node-driver-k779c\" (UID: \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\") " pod="calico-system/csi-node-driver-k779c"
Feb 8 23:22:33.522272 kubelet[2143]: E0208 23:22:33.522261 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522272 kubelet[2143]: W0208 23:22:33.522269 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522320 kubelet[2143]: E0208 23:22:33.522283 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522398 kubelet[2143]: E0208 23:22:33.522388 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522398 kubelet[2143]: W0208 23:22:33.522397 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522446 kubelet[2143]: E0208 23:22:33.522408 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522542 kubelet[2143]: E0208 23:22:33.522531 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522542 kubelet[2143]: W0208 23:22:33.522541 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522593 kubelet[2143]: E0208 23:22:33.522554 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522694 kubelet[2143]: E0208 23:22:33.522683 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522694 kubelet[2143]: W0208 23:22:33.522692 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522746 kubelet[2143]: E0208 23:22:33.522705 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.522930 kubelet[2143]: E0208 23:22:33.522912 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.522930 kubelet[2143]: W0208 23:22:33.522923 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.522991 kubelet[2143]: E0208 23:22:33.522936 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523054 kubelet[2143]: E0208 23:22:33.523044 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523054 kubelet[2143]: W0208 23:22:33.523052 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523135 kubelet[2143]: E0208 23:22:33.523059 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523180 kubelet[2143]: E0208 23:22:33.523170 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523180 kubelet[2143]: W0208 23:22:33.523178 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523231 kubelet[2143]: E0208 23:22:33.523186 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523329 kubelet[2143]: E0208 23:22:33.523317 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523354 kubelet[2143]: W0208 23:22:33.523329 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523354 kubelet[2143]: E0208 23:22:33.523342 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523456 kubelet[2143]: E0208 23:22:33.523445 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523456 kubelet[2143]: W0208 23:22:33.523453 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523531 kubelet[2143]: E0208 23:22:33.523461 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523646 kubelet[2143]: E0208 23:22:33.523631 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523646 kubelet[2143]: W0208 23:22:33.523645 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523709 kubelet[2143]: E0208 23:22:33.523659 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523833 kubelet[2143]: E0208 23:22:33.523819 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523833 kubelet[2143]: W0208 23:22:33.523829 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523924 kubelet[2143]: E0208 23:22:33.523848 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.523962 kubelet[2143]: E0208 23:22:33.523957 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.523993 kubelet[2143]: W0208 23:22:33.523963 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.523993 kubelet[2143]: E0208 23:22:33.523972 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.524138 kubelet[2143]: E0208 23:22:33.524061 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.524138 kubelet[2143]: W0208 23:22:33.524067 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.524138 kubelet[2143]: E0208 23:22:33.524075 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.524292 kubelet[2143]: E0208 23:22:33.524176 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.524292 kubelet[2143]: W0208 23:22:33.524181 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.524292 kubelet[2143]: E0208 23:22:33.524189 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.625115 kubelet[2143]: E0208 23:22:33.625071 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.625115 kubelet[2143]: W0208 23:22:33.625092 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.625115 kubelet[2143]: E0208 23:22:33.625117 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.625376 kubelet[2143]: E0208 23:22:33.625315 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.625376 kubelet[2143]: W0208 23:22:33.625325 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.625376 kubelet[2143]: E0208 23:22:33.625349 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.625592 kubelet[2143]: E0208 23:22:33.625568 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.625592 kubelet[2143]: W0208 23:22:33.625584 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.625799 kubelet[2143]: E0208 23:22:33.625606 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.625868 kubelet[2143]: E0208 23:22:33.625851 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.625868 kubelet[2143]: W0208 23:22:33.625866 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.625941 kubelet[2143]: E0208 23:22:33.625888 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.626066 kubelet[2143]: E0208 23:22:33.626049 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.626066 kubelet[2143]: W0208 23:22:33.626065 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.626142 kubelet[2143]: E0208 23:22:33.626090 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.626300 kubelet[2143]: E0208 23:22:33.626283 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.626300 kubelet[2143]: W0208 23:22:33.626297 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.626369 kubelet[2143]: E0208 23:22:33.626318 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.626504 kubelet[2143]: E0208 23:22:33.626487 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.626504 kubelet[2143]: W0208 23:22:33.626501 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.626581 kubelet[2143]: E0208 23:22:33.626522 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.626708 kubelet[2143]: E0208 23:22:33.626691 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.626708 kubelet[2143]: W0208 23:22:33.626704 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.626803 kubelet[2143]: E0208 23:22:33.626723 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.626936 kubelet[2143]: E0208 23:22:33.626918 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.626936 kubelet[2143]: W0208 23:22:33.626932 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.627010 kubelet[2143]: E0208 23:22:33.626965 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.627110 kubelet[2143]: E0208 23:22:33.627095 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.627110 kubelet[2143]: W0208 23:22:33.627108 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.627183 kubelet[2143]: E0208 23:22:33.627149 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.627318 kubelet[2143]: E0208 23:22:33.627300 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.627318 kubelet[2143]: W0208 23:22:33.627315 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.627387 kubelet[2143]: E0208 23:22:33.627334 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.627510 kubelet[2143]: E0208 23:22:33.627494 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.627510 kubelet[2143]: W0208 23:22:33.627507 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.627581 kubelet[2143]: E0208 23:22:33.627527 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.627719 kubelet[2143]: E0208 23:22:33.627697 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.627719 kubelet[2143]: W0208 23:22:33.627711 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.627785 kubelet[2143]: E0208 23:22:33.627733 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.627948 kubelet[2143]: E0208 23:22:33.627931 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.627948 kubelet[2143]: W0208 23:22:33.627945 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.628023 kubelet[2143]: E0208 23:22:33.627963 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.628201 kubelet[2143]: E0208 23:22:33.628182 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.628201 kubelet[2143]: W0208 23:22:33.628196 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.628286 kubelet[2143]: E0208 23:22:33.628217 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.628397 kubelet[2143]: E0208 23:22:33.628380 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.628397 kubelet[2143]: W0208 23:22:33.628395 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.628476 kubelet[2143]: E0208 23:22:33.628425 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.628567 kubelet[2143]: E0208 23:22:33.628550 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.628567 kubelet[2143]: W0208 23:22:33.628564 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.628636 kubelet[2143]: E0208 23:22:33.628597 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.628741 kubelet[2143]: E0208 23:22:33.628725 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.628741 kubelet[2143]: W0208 23:22:33.628738 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.628830 kubelet[2143]: E0208 23:22:33.628781 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.628929 kubelet[2143]: E0208 23:22:33.628913 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.628929 kubelet[2143]: W0208 23:22:33.628927 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629004 kubelet[2143]: E0208 23:22:33.628961 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.629103 kubelet[2143]: E0208 23:22:33.629087 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.629103 kubelet[2143]: W0208 23:22:33.629100 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629171 kubelet[2143]: E0208 23:22:33.629118 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.629315 kubelet[2143]: E0208 23:22:33.629296 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.629315 kubelet[2143]: W0208 23:22:33.629310 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629413 kubelet[2143]: E0208 23:22:33.629328 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.629497 kubelet[2143]: E0208 23:22:33.629482 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.629497 kubelet[2143]: W0208 23:22:33.629494 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629553 kubelet[2143]: E0208 23:22:33.629512 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.629668 kubelet[2143]: E0208 23:22:33.629656 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.629693 kubelet[2143]: W0208 23:22:33.629669 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629693 kubelet[2143]: E0208 23:22:33.629685 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:22:33.629913 kubelet[2143]: E0208 23:22:33.629900 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:22:33.629942 kubelet[2143]: W0208 23:22:33.629913 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:22:33.629942 kubelet[2143]: E0208 23:22:33.629933 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.630093 kubelet[2143]: E0208 23:22:33.630081 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.630118 kubelet[2143]: W0208 23:22:33.630092 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.630118 kubelet[2143]: E0208 23:22:33.630106 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.630531 kubelet[2143]: E0208 23:22:33.630510 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.630531 kubelet[2143]: W0208 23:22:33.630523 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.630715 kubelet[2143]: E0208 23:22:33.630538 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.729667 kubelet[2143]: E0208 23:22:33.729637 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.729667 kubelet[2143]: W0208 23:22:33.729657 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.729865 kubelet[2143]: E0208 23:22:33.729677 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.730021 kubelet[2143]: E0208 23:22:33.729992 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.730021 kubelet[2143]: W0208 23:22:33.730018 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.730128 kubelet[2143]: E0208 23:22:33.730046 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.747278 kubelet[2143]: E0208 23:22:33.747246 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:33.748239 env[1196]: time="2024-02-08T23:22:33.748177342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7bb6b67-9l75w,Uid:879c6375-9937-49ab-b33a-e4f754cac508,Namespace:calico-system,Attempt:0,}" Feb 8 23:22:33.754396 kubelet[2143]: E0208 23:22:33.754363 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.754396 kubelet[2143]: W0208 23:22:33.754382 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.754396 kubelet[2143]: E0208 23:22:33.754400 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.830719 kubelet[2143]: E0208 23:22:33.830613 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.830719 kubelet[2143]: W0208 23:22:33.830636 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.830719 kubelet[2143]: E0208 23:22:33.830658 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:33.844745 kubelet[2143]: E0208 23:22:33.844699 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:33.845403 env[1196]: time="2024-02-08T23:22:33.845192958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zgc8j,Uid:5fc690ce-af04-4df5-9869-08171a5b32af,Namespace:calico-system,Attempt:0,}" Feb 8 23:22:33.878858 env[1196]: time="2024-02-08T23:22:33.878789282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:33.878858 env[1196]: time="2024-02-08T23:22:33.878851745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:33.879014 env[1196]: time="2024-02-08T23:22:33.878881934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:33.879064 env[1196]: time="2024-02-08T23:22:33.879024827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd95a1afeea34732b9dfc2089d6865a0065af2604906912c1bf84cd4dbe65764 pid=2671 runtime=io.containerd.runc.v2 Feb 8 23:22:33.931120 kubelet[2143]: E0208 23:22:33.931060 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.931120 kubelet[2143]: W0208 23:22:33.931075 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.931120 kubelet[2143]: E0208 23:22:33.931090 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.932033 env[1196]: time="2024-02-08T23:22:33.932000463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7bb6b67-9l75w,Uid:879c6375-9937-49ab-b33a-e4f754cac508,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd95a1afeea34732b9dfc2089d6865a0065af2604906912c1bf84cd4dbe65764\"" Feb 8 23:22:33.932997 kubelet[2143]: E0208 23:22:33.932604 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:33.933519 env[1196]: time="2024-02-08T23:22:33.933499435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 8 23:22:33.953815 kubelet[2143]: E0208 23:22:33.953788 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:33.953815 kubelet[2143]: W0208 23:22:33.953806 2143 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:33.954261 kubelet[2143]: E0208 23:22:33.953828 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:33.987692 env[1196]: time="2024-02-08T23:22:33.987613230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:22:33.987692 env[1196]: time="2024-02-08T23:22:33.987654110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:22:33.987692 env[1196]: time="2024-02-08T23:22:33.987663709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:22:33.987929 env[1196]: time="2024-02-08T23:22:33.987821972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa pid=2714 runtime=io.containerd.runc.v2 Feb 8 23:22:34.039480 env[1196]: time="2024-02-08T23:22:34.039443167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zgc8j,Uid:5fc690ce-af04-4df5-9869-08171a5b32af,Namespace:calico-system,Attempt:0,} returns sandbox id \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\"" Feb 8 23:22:34.040632 kubelet[2143]: E0208 23:22:34.040246 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:34.077000 audit[2774]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 8 23:22:34.077000 audit[2774]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fffaf76a410 a2=0 a3=7fffaf76a3fc items=0 ppid=2312 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:34.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:34.080000 audit[2774]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:34.080000 audit[2774]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffaf76a410 a2=0 a3=7fffaf76a3fc items=0 ppid=2312 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:34.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:34.624474 kubelet[2143]: E0208 23:22:34.624410 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:36.623802 kubelet[2143]: E0208 23:22:36.623776 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:37.022608 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320933492.mount: Deactivated successfully. Feb 8 23:22:38.624197 kubelet[2143]: E0208 23:22:38.624108 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:40.623812 kubelet[2143]: E0208 23:22:40.623762 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:42.537909 env[1196]: time="2024-02-08T23:22:42.537844814Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:42.564745 env[1196]: time="2024-02-08T23:22:42.564700145Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:42.578399 env[1196]: time="2024-02-08T23:22:42.578346495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:42.600637 env[1196]: time="2024-02-08T23:22:42.600536447Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:42.601348 env[1196]: 
time="2024-02-08T23:22:42.601321292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 8 23:22:42.602062 env[1196]: time="2024-02-08T23:22:42.602033747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 8 23:22:42.611847 env[1196]: time="2024-02-08T23:22:42.611797019Z" level=info msg="CreateContainer within sandbox \"dd95a1afeea34732b9dfc2089d6865a0065af2604906912c1bf84cd4dbe65764\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 8 23:22:42.624288 kubelet[2143]: E0208 23:22:42.624266 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:42.797684 env[1196]: time="2024-02-08T23:22:42.797550592Z" level=info msg="CreateContainer within sandbox \"dd95a1afeea34732b9dfc2089d6865a0065af2604906912c1bf84cd4dbe65764\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fb113b5d7a075500e98916fa01b2a11e035e473f384d830d038c9d83f1d23ac3\"" Feb 8 23:22:42.798261 env[1196]: time="2024-02-08T23:22:42.798230474Z" level=info msg="StartContainer for \"fb113b5d7a075500e98916fa01b2a11e035e473f384d830d038c9d83f1d23ac3\"" Feb 8 23:22:42.900109 env[1196]: time="2024-02-08T23:22:42.900052632Z" level=info msg="StartContainer for \"fb113b5d7a075500e98916fa01b2a11e035e473f384d830d038c9d83f1d23ac3\" returns successfully" Feb 8 23:22:43.694965 kubelet[2143]: E0208 23:22:43.694937 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:43.703075 kubelet[2143]: I0208 23:22:43.703038 2143 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7f7bb6b67-9l75w" podStartSLOduration=-9.22337202615177e+09 pod.CreationTimestamp="2024-02-08 23:22:33 +0000 UTC" firstStartedPulling="2024-02-08 23:22:33.933285322 +0000 UTC m=+21.446704575" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:22:43.70263966 +0000 UTC m=+31.216058913" watchObservedRunningTime="2024-02-08 23:22:43.70300537 +0000 UTC m=+31.216424623" Feb 8 23:22:43.789065 kubelet[2143]: E0208 23:22:43.789021 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789065 kubelet[2143]: W0208 23:22:43.789048 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789065 kubelet[2143]: E0208 23:22:43.789072 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.789318 kubelet[2143]: E0208 23:22:43.789199 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789318 kubelet[2143]: W0208 23:22:43.789207 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789318 kubelet[2143]: E0208 23:22:43.789218 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.789420 kubelet[2143]: E0208 23:22:43.789343 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789420 kubelet[2143]: W0208 23:22:43.789351 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789420 kubelet[2143]: E0208 23:22:43.789364 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.789539 kubelet[2143]: E0208 23:22:43.789523 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789539 kubelet[2143]: W0208 23:22:43.789531 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789606 kubelet[2143]: E0208 23:22:43.789543 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.789676 kubelet[2143]: E0208 23:22:43.789659 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789676 kubelet[2143]: W0208 23:22:43.789669 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789758 kubelet[2143]: E0208 23:22:43.789681 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.789824 kubelet[2143]: E0208 23:22:43.789812 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.789824 kubelet[2143]: W0208 23:22:43.789820 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.789899 kubelet[2143]: E0208 23:22:43.789834 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.790005 kubelet[2143]: E0208 23:22:43.789988 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790005 kubelet[2143]: W0208 23:22:43.789998 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790088 kubelet[2143]: E0208 23:22:43.790010 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.790134 kubelet[2143]: E0208 23:22:43.790124 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790172 kubelet[2143]: W0208 23:22:43.790133 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790172 kubelet[2143]: E0208 23:22:43.790145 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.790273 kubelet[2143]: E0208 23:22:43.790262 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790273 kubelet[2143]: W0208 23:22:43.790271 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790356 kubelet[2143]: E0208 23:22:43.790283 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.790417 kubelet[2143]: E0208 23:22:43.790405 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790417 kubelet[2143]: W0208 23:22:43.790415 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790508 kubelet[2143]: E0208 23:22:43.790429 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.790568 kubelet[2143]: E0208 23:22:43.790557 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790568 kubelet[2143]: W0208 23:22:43.790567 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790648 kubelet[2143]: E0208 23:22:43.790578 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.790703 kubelet[2143]: E0208 23:22:43.790692 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.790703 kubelet[2143]: W0208 23:22:43.790702 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.790778 kubelet[2143]: E0208 23:22:43.790714 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.799067 kubelet[2143]: E0208 23:22:43.799046 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799067 kubelet[2143]: W0208 23:22:43.799060 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799160 kubelet[2143]: E0208 23:22:43.799073 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.799242 kubelet[2143]: E0208 23:22:43.799226 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799242 kubelet[2143]: W0208 23:22:43.799237 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799289 kubelet[2143]: E0208 23:22:43.799252 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.799405 kubelet[2143]: E0208 23:22:43.799395 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799405 kubelet[2143]: W0208 23:22:43.799405 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799484 kubelet[2143]: E0208 23:22:43.799420 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.799612 kubelet[2143]: E0208 23:22:43.799597 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799612 kubelet[2143]: W0208 23:22:43.799607 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799659 kubelet[2143]: E0208 23:22:43.799622 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:43.799797 kubelet[2143]: E0208 23:22:43.799782 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799797 kubelet[2143]: W0208 23:22:43.799794 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799875 kubelet[2143]: E0208 23:22:43.799810 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:43.799951 kubelet[2143]: E0208 23:22:43.799941 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:43.799951 kubelet[2143]: W0208 23:22:43.799950 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:43.799993 kubelet[2143]: E0208 23:22:43.799963 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:44.624383 kubelet[2143]: E0208 23:22:44.624319 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:44.696098 kubelet[2143]: I0208 23:22:44.696076 2143 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:22:44.696609 kubelet[2143]: E0208 23:22:44.696592 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:44.795753 kubelet[2143]: E0208 23:22:44.795716 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:44.795753 kubelet[2143]: W0208 23:22:44.795739 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:44.795974 kubelet[2143]: E0208 23:22:44.795786 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:22:44.811113 kubelet[2143]: E0208 23:22:44.811089 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:44.811113 kubelet[2143]: W0208 23:22:44.811108 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:44.811260 kubelet[2143]: E0208 23:22:44.811129 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:44.811333 kubelet[2143]: E0208 23:22:44.811317 2143 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:22:44.811333 kubelet[2143]: W0208 23:22:44.811332 2143 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:22:44.811410 kubelet[2143]: E0208 23:22:44.811347 2143 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:22:46.624365 kubelet[2143]: E0208 23:22:46.624283 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:48.055314 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:34822.service. 
Feb 8 23:22:48.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.76:22-10.0.0.1:34822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:48.056486 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 8 23:22:48.056557 kernel: audit: type=1130 audit(1707434568.054:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.76:22-10.0.0.1:34822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:48.090000 audit[2899]: USER_ACCT pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.091209 sshd[2899]: Accepted publickey for core from 10.0.0.1 port 34822 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:22:48.093227 sshd[2899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:22:48.092000 audit[2899]: CRED_ACQ pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.096976 systemd-logind[1177]: New session 8 of user core. 
Feb 8 23:22:48.097575 kernel: audit: type=1101 audit(1707434568.090:278): pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.097656 kernel: audit: type=1103 audit(1707434568.092:279): pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.097968 systemd[1]: Started session-8.scope. Feb 8 23:22:48.092000 audit[2899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe340660c0 a2=3 a3=0 items=0 ppid=1 pid=2899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:48.103474 kernel: audit: type=1006 audit(1707434568.092:280): pid=2899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 8 23:22:48.103523 kernel: audit: type=1300 audit(1707434568.092:280): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe340660c0 a2=3 a3=0 items=0 ppid=1 pid=2899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:48.103563 kernel: audit: type=1327 audit(1707434568.092:280): proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:48.092000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:48.102000 audit[2899]: USER_START pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.108205 kernel: audit: type=1105 audit(1707434568.102:281): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.108268 kernel: audit: type=1103 audit(1707434568.103:282): pid=2902 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.103000 audit[2902]: CRED_ACQ pid=2902 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.243964 sshd[2899]: pam_unix(sshd:session): session closed for user core Feb 8 23:22:48.244000 audit[2899]: USER_END pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.245969 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:34822.service: Deactivated successfully. Feb 8 23:22:48.246838 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:22:48.246928 systemd-logind[1177]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:22:48.247778 systemd-logind[1177]: Removed session 8. 
Feb 8 23:22:48.244000 audit[2899]: CRED_DISP pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.251667 kernel: audit: type=1106 audit(1707434568.244:283): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.251720 kernel: audit: type=1104 audit(1707434568.244:284): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:48.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.76:22-10.0.0.1:34822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:48.464266 env[1196]: time="2024-02-08T23:22:48.464205350Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:48.495960 env[1196]: time="2024-02-08T23:22:48.495908927Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:48.513599 env[1196]: time="2024-02-08T23:22:48.513564794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:48.533341 env[1196]: time="2024-02-08T23:22:48.533292892Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:22:48.534155 env[1196]: time="2024-02-08T23:22:48.534120901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 8 23:22:48.535944 env[1196]: time="2024-02-08T23:22:48.535915427Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 8 23:22:48.624318 kubelet[2143]: E0208 23:22:48.624286 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" 
podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:48.675500 env[1196]: time="2024-02-08T23:22:48.675418095Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa\"" Feb 8 23:22:48.676087 env[1196]: time="2024-02-08T23:22:48.676020909Z" level=info msg="StartContainer for \"df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa\"" Feb 8 23:22:48.784258 env[1196]: time="2024-02-08T23:22:48.784121894Z" level=info msg="StartContainer for \"df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa\" returns successfully" Feb 8 23:22:48.799330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa-rootfs.mount: Deactivated successfully. Feb 8 23:22:48.940175 env[1196]: time="2024-02-08T23:22:48.940107804Z" level=info msg="shim disconnected" id=df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa Feb 8 23:22:48.940175 env[1196]: time="2024-02-08T23:22:48.940153081Z" level=warning msg="cleaning up after shim disconnected" id=df576d08bf8f4ac6b7f9bb101b164a4095ddb8058749b8c4d9726289c4811aaa namespace=k8s.io Feb 8 23:22:48.940175 env[1196]: time="2024-02-08T23:22:48.940163602Z" level=info msg="cleaning up dead shim" Feb 8 23:22:48.947788 env[1196]: time="2024-02-08T23:22:48.947757477Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:22:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n" Feb 8 23:22:49.705017 kubelet[2143]: E0208 23:22:49.704942 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:49.712797 env[1196]: time="2024-02-08T23:22:49.709614102Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 8 23:22:50.445548 kubelet[2143]: I0208 23:22:50.445502 2143 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:22:50.446077 kubelet[2143]: E0208 23:22:50.446053 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:50.522000 audit[3015]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:50.522000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fffb63c9fd0 a2=0 a3=7fffb63c9fbc items=0 ppid=2312 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:50.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:50.523000 audit[3015]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:22:50.523000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fffb63c9fd0 a2=0 a3=7fffb63c9fbc items=0 ppid=2312 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:50.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:22:50.623520 kubelet[2143]: E0208 23:22:50.623471 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:50.706978 kubelet[2143]: E0208 23:22:50.706871 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:22:52.624298 kubelet[2143]: E0208 23:22:52.624263 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:53.247848 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:34836.service. Feb 8 23:22:53.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.76:22-10.0.0.1:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:53.248889 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 8 23:22:53.248967 kernel: audit: type=1130 audit(1707434573.247:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.76:22-10.0.0.1:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:53.424000 audit[3016]: USER_ACCT pid=3016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.425408 sshd[3016]: Accepted publickey for core from 10.0.0.1 port 34836 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:22:53.427000 audit[3016]: CRED_ACQ pid=3016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.428430 sshd[3016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:22:53.430650 kernel: audit: type=1101 audit(1707434573.424:289): pid=3016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.430780 kernel: audit: type=1103 audit(1707434573.427:290): pid=3016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.430803 kernel: audit: type=1006 audit(1707434573.427:291): pid=3016 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 8 23:22:53.432373 kernel: audit: type=1300 audit(1707434573.427:291): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc00beb480 a2=3 a3=0 items=0 ppid=1 pid=3016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:22:53.427000 audit[3016]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc00beb480 a2=3 a3=0 items=0 ppid=1 pid=3016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:53.432007 systemd-logind[1177]: New session 9 of user core. Feb 8 23:22:53.432883 systemd[1]: Started session-9.scope. Feb 8 23:22:53.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:53.437049 kernel: audit: type=1327 audit(1707434573.427:291): proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:53.437000 audit[3016]: USER_START pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.439000 audit[3019]: CRED_ACQ pid=3019 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.443836 kernel: audit: type=1105 audit(1707434573.437:292): pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.443891 kernel: audit: type=1103 audit(1707434573.439:293): pid=3019 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.589030 sshd[3016]: pam_unix(sshd:session): session closed for user core Feb 8 23:22:53.589000 audit[3016]: USER_END 
pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.591452 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:34836.service: Deactivated successfully. Feb 8 23:22:53.592168 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:22:53.593112 systemd-logind[1177]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:22:53.593985 systemd-logind[1177]: Removed session 9. Feb 8 23:22:53.589000 audit[3016]: CRED_DISP pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.614796 kernel: audit: type=1106 audit(1707434573.589:294): pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.614849 kernel: audit: type=1104 audit(1707434573.589:295): pid=3016 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:53.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.76:22-10.0.0.1:34836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:54.623605 kubelet[2143]: E0208 23:22:54.623577 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:56.623958 kubelet[2143]: E0208 23:22:56.623556 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:58.591754 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:59236.service. Feb 8 23:22:58.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.76:22-10.0.0.1:59236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:22:58.593164 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:22:58.593306 kernel: audit: type=1130 audit(1707434578.591:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.76:22-10.0.0.1:59236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:22:58.624192 kubelet[2143]: E0208 23:22:58.624167 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:22:58.624000 audit[3035]: USER_ACCT pid=3035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.627132 sshd[3035]: Accepted publickey for core from 10.0.0.1 port 59236 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:22:58.627000 audit[3035]: CRED_ACQ pid=3035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.628893 kernel: audit: type=1101 audit(1707434578.624:298): pid=3035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.628928 kernel: audit: type=1103 audit(1707434578.627:299): pid=3035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.629067 sshd[3035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:22:58.633294 kernel: audit: type=1006 audit(1707434578.627:300): pid=3035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=10 res=1 Feb 8 23:22:58.633351 kernel: audit: type=1300 audit(1707434578.627:300): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb4788c10 a2=3 a3=0 items=0 ppid=1 pid=3035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:58.627000 audit[3035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb4788c10 a2=3 a3=0 items=0 ppid=1 pid=3035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:22:58.633528 systemd-logind[1177]: New session 10 of user core. Feb 8 23:22:58.633718 systemd[1]: Started session-10.scope. Feb 8 23:22:58.627000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:58.642390 kernel: audit: type=1327 audit(1707434578.627:300): proctitle=737368643A20636F7265205B707269765D Feb 8 23:22:58.642440 kernel: audit: type=1105 audit(1707434578.638:301): pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.638000 audit[3035]: USER_START pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.640000 audit[3038]: CRED_ACQ pid=3038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.647497 kernel: 
audit: type=1103 audit(1707434578.640:302): pid=3038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.743243 sshd[3035]: pam_unix(sshd:session): session closed for user core Feb 8 23:22:58.743000 audit[3035]: USER_END pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.745988 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:59236.service: Deactivated successfully. Feb 8 23:22:58.746654 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:22:58.747449 systemd-logind[1177]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:22:58.751113 kernel: audit: type=1106 audit(1707434578.743:303): pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.751264 kernel: audit: type=1104 audit(1707434578.743:304): pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.743000 audit[3035]: CRED_DISP pid=3035 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:22:58.748385 systemd-logind[1177]: Removed session 10. 
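Each audit record above is keyed by `audit(<epoch-seconds>.<millis>:<serial>)`; the serial number, not the timestamp, is what ties together the multiple records of one event (e.g. the three `:300` lines). A quick sketch for cross-checking the epoch part against the wall-clock prefixes in this log (assumes the host clock is UTC, which the matching prefixes here suggest):

```python
from datetime import datetime, timezone

def audit_key_to_utc(key: str) -> datetime:
    """Convert the '1707434578.591:297' payload of audit(...) to a UTC datetime."""
    epoch, _, _serial = key.partition(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

# audit(1707434578.591:297) above lines up with the 'Feb 8 23:22:58' prefix
print(audit_key_to_utc("1707434578.591:297").strftime("%b %d %H:%M:%S"))
# Feb 08 23:22:58
```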
Feb 8 23:22:58.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.76:22-10.0.0.1:59236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:00.507779 env[1196]: time="2024-02-08T23:23:00.507704760Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:00.551465 env[1196]: time="2024-02-08T23:23:00.551412753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:00.570016 env[1196]: time="2024-02-08T23:23:00.569956030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:00.596055 env[1196]: time="2024-02-08T23:23:00.595997477Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:00.597069 env[1196]: time="2024-02-08T23:23:00.597019284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 8 23:23:00.599144 env[1196]: time="2024-02-08T23:23:00.599115021Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 8 23:23:00.623996 kubelet[2143]: E0208 23:23:00.623947 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:23:00.814854 env[1196]: time="2024-02-08T23:23:00.814717619Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1\"" Feb 8 23:23:00.815429 env[1196]: time="2024-02-08T23:23:00.815385248Z" level=info msg="StartContainer for \"b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1\"" Feb 8 23:23:00.979322 env[1196]: time="2024-02-08T23:23:00.979244694Z" level=info msg="StartContainer for \"b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1\" returns successfully" Feb 8 23:23:01.726524 kubelet[2143]: E0208 23:23:01.726494 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:02.623941 kubelet[2143]: E0208 23:23:02.623893 2143 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:23:02.727612 kubelet[2143]: E0208 23:23:02.727568 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:03.483728 env[1196]: time="2024-02-08T23:23:03.483670995Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: failed to load 
CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Feb 8 23:23:03.499782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1-rootfs.mount: Deactivated successfully. Feb 8 23:23:03.500495 kubelet[2143]: I0208 23:23:03.500475 2143 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:23:03.617971 kubelet[2143]: I0208 23:23:03.617925 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:23:03.618569 kubelet[2143]: I0208 23:23:03.618529 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:23:03.728508 kubelet[2143]: I0208 23:23:03.728470 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlqz\" (UniqueName: \"kubernetes.io/projected/e684c1fb-85af-423e-8c6b-15288ce2126a-kube-api-access-4dlqz\") pod \"calico-kube-controllers-c96d6f8c9-hs9cp\" (UID: \"e684c1fb-85af-423e-8c6b-15288ce2126a\") " pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" Feb 8 23:23:03.728508 kubelet[2143]: I0208 23:23:03.728523 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e684c1fb-85af-423e-8c6b-15288ce2126a-tigera-ca-bundle\") pod \"calico-kube-controllers-c96d6f8c9-hs9cp\" (UID: \"e684c1fb-85af-423e-8c6b-15288ce2126a\") " pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" Feb 8 23:23:03.728947 kubelet[2143]: I0208 23:23:03.728547 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53d36018-7aad-443b-be14-946096d7c23e-config-volume\") pod \"coredns-787d4945fb-bvd5v\" (UID: \"53d36018-7aad-443b-be14-946096d7c23e\") " 
pod="kube-system/coredns-787d4945fb-bvd5v" Feb 8 23:23:03.728947 kubelet[2143]: I0208 23:23:03.728625 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwh8\" (UniqueName: \"kubernetes.io/projected/53d36018-7aad-443b-be14-946096d7c23e-kube-api-access-9nwh8\") pod \"coredns-787d4945fb-bvd5v\" (UID: \"53d36018-7aad-443b-be14-946096d7c23e\") " pod="kube-system/coredns-787d4945fb-bvd5v" Feb 8 23:23:03.746710 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:59242.service. Feb 8 23:23:03.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.76:22-10.0.0.1:59242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:03.760586 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:03.760641 kernel: audit: type=1130 audit(1707434583.745:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.76:22-10.0.0.1:59242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:03.849890 kubelet[2143]: I0208 23:23:03.849858 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:23:03.929962 kubelet[2143]: I0208 23:23:03.929921 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04f8b771-04a3-4156-98db-f84147d5ca2e-config-volume\") pod \"coredns-787d4945fb-v9kwz\" (UID: \"04f8b771-04a3-4156-98db-f84147d5ca2e\") " pod="kube-system/coredns-787d4945fb-v9kwz" Feb 8 23:23:03.929962 kubelet[2143]: I0208 23:23:03.929972 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx5k\" (UniqueName: \"kubernetes.io/projected/04f8b771-04a3-4156-98db-f84147d5ca2e-kube-api-access-qhx5k\") pod \"coredns-787d4945fb-v9kwz\" (UID: \"04f8b771-04a3-4156-98db-f84147d5ca2e\") " pod="kube-system/coredns-787d4945fb-v9kwz" Feb 8 23:23:03.934000 audit[3105]: USER_ACCT pid=3105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.936655 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 59242 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:03.940243 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:03.942650 kernel: audit: type=1101 audit(1707434583.934:307): pid=3105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.942715 kernel: audit: type=1103 audit(1707434583.935:308): pid=3105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.935000 audit[3105]: CRED_ACQ pid=3105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.947339 systemd-logind[1177]: New session 11 of user core. Feb 8 23:23:03.948170 systemd[1]: Started session-11.scope. Feb 8 23:23:03.962561 kernel: audit: type=1006 audit(1707434583.935:309): pid=3105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 8 23:23:03.962618 kernel: audit: type=1300 audit(1707434583.935:309): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeac6ef4b0 a2=3 a3=0 items=0 ppid=1 pid=3105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:03.935000 audit[3105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeac6ef4b0 a2=3 a3=0 items=0 ppid=1 pid=3105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:03.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:03.973847 kernel: audit: type=1327 audit(1707434583.935:309): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:03.973901 kernel: audit: type=1105 audit(1707434583.950:310): pid=3105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.950000 audit[3105]: USER_START pid=3105 uid=0 
auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:03.974475 env[1196]: time="2024-02-08T23:23:03.974435594Z" level=info msg="shim disconnected" id=b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1 Feb 8 23:23:03.974552 env[1196]: time="2024-02-08T23:23:03.974481832Z" level=warning msg="cleaning up after shim disconnected" id=b434fdc8190b3ccaf7498af9c14158bca54bd0ace377582aed36f1930740f2f1 namespace=k8s.io Feb 8 23:23:03.974552 env[1196]: time="2024-02-08T23:23:03.974492502Z" level=info msg="cleaning up dead shim" Feb 8 23:23:03.980654 env[1196]: time="2024-02-08T23:23:03.980631343Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3111 runtime=io.containerd.runc.v2\n" Feb 8 23:23:03.951000 audit[3110]: CRED_ACQ pid=3110 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.013821 kernel: audit: type=1103 audit(1707434583.951:311): pid=3110 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.085801 sshd[3105]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:04.084000 audit[3105]: USER_END pid=3105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.088490 
systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:59242.service: Deactivated successfully. Feb 8 23:23:04.089190 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:23:04.092437 kernel: audit: type=1106 audit(1707434584.084:312): pid=3105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.092501 kernel: audit: type=1104 audit(1707434584.085:313): pid=3105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.085000 audit[3105]: CRED_DISP pid=3105 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:04.090234 systemd-logind[1177]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:23:04.092264 systemd-logind[1177]: Removed session 11. Feb 8 23:23:04.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.76:22-10.0.0.1:59242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:04.153336 kubelet[2143]: E0208 23:23:04.153307 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:04.153799 env[1196]: time="2024-02-08T23:23:04.153757695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9kwz,Uid:04f8b771-04a3-4156-98db-f84147d5ca2e,Namespace:kube-system,Attempt:0,}" Feb 8 23:23:04.220655 env[1196]: time="2024-02-08T23:23:04.220607297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96d6f8c9-hs9cp,Uid:e684c1fb-85af-423e-8c6b-15288ce2126a,Namespace:calico-system,Attempt:0,}" Feb 8 23:23:04.223927 kubelet[2143]: E0208 23:23:04.223876 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:04.224390 env[1196]: time="2024-02-08T23:23:04.224355700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bvd5v,Uid:53d36018-7aad-443b-be14-946096d7c23e,Namespace:kube-system,Attempt:0,}" Feb 8 23:23:04.627329 env[1196]: time="2024-02-08T23:23:04.627287454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k779c,Uid:2f044ac1-9cb8-43bc-bcbe-22f291a59d64,Namespace:calico-system,Attempt:0,}" Feb 8 23:23:04.658600 env[1196]: time="2024-02-08T23:23:04.658522364Z" level=error msg="Failed to destroy network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.659895 env[1196]: time="2024-02-08T23:23:04.658869939Z" level=error msg="encountered an error cleaning up failed sandbox 
\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.659895 env[1196]: time="2024-02-08T23:23:04.658908574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9kwz,Uid:04f8b771-04a3-4156-98db-f84147d5ca2e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.659996 kubelet[2143]: E0208 23:23:04.659467 2143 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.659996 kubelet[2143]: E0208 23:23:04.659523 2143 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-v9kwz" Feb 8 23:23:04.659996 kubelet[2143]: E0208 23:23:04.659545 2143 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-v9kwz" Feb 8 23:23:04.660084 kubelet[2143]: E0208 23:23:04.659593 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-v9kwz_kube-system(04f8b771-04a3-4156-98db-f84147d5ca2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-v9kwz_kube-system(04f8b771-04a3-4156-98db-f84147d5ca2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-v9kwz" podUID=04f8b771-04a3-4156-98db-f84147d5ca2e Feb 8 23:23:04.660488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb-shm.mount: Deactivated successfully. 
Feb 8 23:23:04.711728 env[1196]: time="2024-02-08T23:23:04.711668583Z" level=error msg="Failed to destroy network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.712042 env[1196]: time="2024-02-08T23:23:04.712013953Z" level=error msg="encountered an error cleaning up failed sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.712084 env[1196]: time="2024-02-08T23:23:04.712058699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bvd5v,Uid:53d36018-7aad-443b-be14-946096d7c23e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.712324 kubelet[2143]: E0208 23:23:04.712299 2143 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.712398 kubelet[2143]: E0208 23:23:04.712355 2143 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-bvd5v" Feb 8 23:23:04.712398 kubelet[2143]: E0208 23:23:04.712374 2143 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-bvd5v" Feb 8 23:23:04.712467 kubelet[2143]: E0208 23:23:04.712424 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-bvd5v_kube-system(53d36018-7aad-443b-be14-946096d7c23e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-bvd5v_kube-system(53d36018-7aad-443b-be14-946096d7c23e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-bvd5v" podUID=53d36018-7aad-443b-be14-946096d7c23e Feb 8 23:23:04.732705 kubelet[2143]: E0208 23:23:04.732676 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:04.734569 kubelet[2143]: I0208 23:23:04.734546 2143 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:04.734631 env[1196]: time="2024-02-08T23:23:04.734539657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 8 23:23:04.735048 env[1196]: time="2024-02-08T23:23:04.735018925Z" level=info msg="StopPodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\"" Feb 8 23:23:04.735911 kubelet[2143]: I0208 23:23:04.735888 2143 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:04.736335 env[1196]: time="2024-02-08T23:23:04.736315606Z" level=info msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\"" Feb 8 23:23:04.767934 env[1196]: time="2024-02-08T23:23:04.767875628Z" level=error msg="StopPodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" failed" error="failed to destroy network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.768426 kubelet[2143]: E0208 23:23:04.768387 2143 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:04.768506 kubelet[2143]: E0208 23:23:04.768463 2143 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69} Feb 8 23:23:04.768506 kubelet[2143]: E0208 23:23:04.768495 2143 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53d36018-7aad-443b-be14-946096d7c23e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:23:04.768601 kubelet[2143]: E0208 23:23:04.768532 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53d36018-7aad-443b-be14-946096d7c23e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-bvd5v" podUID=53d36018-7aad-443b-be14-946096d7c23e Feb 8 23:23:04.773327 env[1196]: time="2024-02-08T23:23:04.773291132Z" level=error msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" failed" error="failed to destroy network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.773650 kubelet[2143]: E0208 23:23:04.773631 2143 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:04.773716 kubelet[2143]: E0208 23:23:04.773659 2143 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb} Feb 8 23:23:04.773716 kubelet[2143]: E0208 23:23:04.773691 2143 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04f8b771-04a3-4156-98db-f84147d5ca2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:23:04.773716 kubelet[2143]: E0208 23:23:04.773714 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04f8b771-04a3-4156-98db-f84147d5ca2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-v9kwz" podUID=04f8b771-04a3-4156-98db-f84147d5ca2e Feb 8 23:23:04.840698 env[1196]: time="2024-02-08T23:23:04.840635169Z" level=error msg="Failed to destroy network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.841004 env[1196]: time="2024-02-08T23:23:04.840960992Z" level=error msg="encountered an error cleaning up failed sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.841056 env[1196]: time="2024-02-08T23:23:04.841014554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96d6f8c9-hs9cp,Uid:e684c1fb-85af-423e-8c6b-15288ce2126a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.841265 kubelet[2143]: E0208 23:23:04.841241 2143 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.841330 kubelet[2143]: E0208 23:23:04.841292 2143 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" Feb 8 23:23:04.841330 kubelet[2143]: E0208 23:23:04.841314 2143 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" Feb 8 23:23:04.841401 kubelet[2143]: E0208 23:23:04.841366 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c96d6f8c9-hs9cp_calico-system(e684c1fb-85af-423e-8c6b-15288ce2126a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c96d6f8c9-hs9cp_calico-system(e684c1fb-85af-423e-8c6b-15288ce2126a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" podUID=e684c1fb-85af-423e-8c6b-15288ce2126a Feb 8 23:23:04.953547 env[1196]: time="2024-02-08T23:23:04.953423596Z" level=error msg="Failed to destroy network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.954123 env[1196]: time="2024-02-08T23:23:04.954096604Z" level=error msg="encountered an error cleaning up failed sandbox 
\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.954297 env[1196]: time="2024-02-08T23:23:04.954261239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k779c,Uid:2f044ac1-9cb8-43bc-bcbe-22f291a59d64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.954665 kubelet[2143]: E0208 23:23:04.954632 2143 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:04.954736 kubelet[2143]: E0208 23:23:04.954688 2143 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k779c" Feb 8 23:23:04.954736 kubelet[2143]: E0208 23:23:04.954708 2143 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k779c" Feb 8 23:23:04.954805 kubelet[2143]: E0208 23:23:04.954759 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k779c_calico-system(2f044ac1-9cb8-43bc-bcbe-22f291a59d64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k779c_calico-system(2f044ac1-9cb8-43bc-bcbe-22f291a59d64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:23:05.500700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8-shm.mount: Deactivated successfully. Feb 8 23:23:05.500857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69-shm.mount: Deactivated successfully. 
Feb 8 23:23:05.737933 kubelet[2143]: I0208 23:23:05.737888 2143 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:05.739059 env[1196]: time="2024-02-08T23:23:05.739015215Z" level=info msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" Feb 8 23:23:05.740528 kubelet[2143]: I0208 23:23:05.740500 2143 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:05.742834 env[1196]: time="2024-02-08T23:23:05.740931741Z" level=info msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\"" Feb 8 23:23:05.781675 env[1196]: time="2024-02-08T23:23:05.781279244Z" level=error msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" failed" error="failed to destroy network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:05.783025 kubelet[2143]: E0208 23:23:05.782996 2143 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:05.783097 kubelet[2143]: E0208 23:23:05.783035 2143 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8} Feb 8 23:23:05.784278 kubelet[2143]: E0208 23:23:05.783065 2143 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e684c1fb-85af-423e-8c6b-15288ce2126a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:23:05.784278 kubelet[2143]: E0208 23:23:05.783163 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e684c1fb-85af-423e-8c6b-15288ce2126a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" podUID=e684c1fb-85af-423e-8c6b-15288ce2126a Feb 8 23:23:05.785013 env[1196]: time="2024-02-08T23:23:05.784972560Z" level=error msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" failed" error="failed to destroy network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:23:05.785210 kubelet[2143]: E0208 23:23:05.785186 2143 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:05.785210 kubelet[2143]: E0208 23:23:05.785207 2143 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f} Feb 8 23:23:05.785285 kubelet[2143]: E0208 23:23:05.785232 2143 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:23:05.785285 kubelet[2143]: E0208 23:23:05.785254 2143 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f044ac1-9cb8-43bc-bcbe-22f291a59d64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k779c" podUID=2f044ac1-9cb8-43bc-bcbe-22f291a59d64 Feb 8 23:23:09.088577 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:42532.service. 
Feb 8 23:23:09.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.76:22-10.0.0.1:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:09.092117 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:09.092243 kernel: audit: type=1130 audit(1707434589.087:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.76:22-10.0.0.1:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:09.125000 audit[3377]: USER_ACCT pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.126584 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 42532 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:09.128000 audit[3377]: CRED_ACQ pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.129549 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:09.132396 kernel: audit: type=1101 audit(1707434589.125:316): pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.132453 kernel: audit: type=1103 audit(1707434589.128:317): pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.132480 kernel: audit: type=1006 audit(1707434589.128:318): pid=3377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 8 23:23:09.134602 kernel: audit: type=1300 audit(1707434589.128:318): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7442f310 a2=3 a3=0 items=0 ppid=1 pid=3377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:09.128000 audit[3377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7442f310 a2=3 a3=0 items=0 ppid=1 pid=3377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:09.134833 systemd-logind[1177]: New session 12 of user core. Feb 8 23:23:09.134855 systemd[1]: Started session-12.scope. 
Feb 8 23:23:09.128000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:09.139803 kernel: audit: type=1327 audit(1707434589.128:318): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:09.140000 audit[3377]: USER_START pid=3377 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.141000 audit[3380]: CRED_ACQ pid=3380 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.148068 kernel: audit: type=1105 audit(1707434589.140:319): pid=3377 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.148187 kernel: audit: type=1103 audit(1707434589.141:320): pid=3380 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.249739 sshd[3377]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:09.256746 kernel: audit: type=1106 audit(1707434589.250:321): pid=3377 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.256858 kernel: audit: type=1104 audit(1707434589.250:322): pid=3377 uid=0 auid=500 ses=12 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.250000 audit[3377]: USER_END pid=3377 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.250000 audit[3377]: CRED_DISP pid=3377 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:09.252862 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:42532.service: Deactivated successfully. Feb 8 23:23:09.253803 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:23:09.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.76:22-10.0.0.1:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:09.257746 systemd-logind[1177]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:23:09.259020 systemd-logind[1177]: Removed session 12. Feb 8 23:23:10.841274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765508527.mount: Deactivated successfully. 
Feb 8 23:23:11.223998 env[1196]: time="2024-02-08T23:23:11.223920165Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:11.241665 env[1196]: time="2024-02-08T23:23:11.241528237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:11.248949 env[1196]: time="2024-02-08T23:23:11.248897322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:11.260575 env[1196]: time="2024-02-08T23:23:11.259231501Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:11.260575 env[1196]: time="2024-02-08T23:23:11.259511095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 8 23:23:11.275691 env[1196]: time="2024-02-08T23:23:11.275635273Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 8 23:23:11.458627 env[1196]: time="2024-02-08T23:23:11.454197930Z" level=info msg="CreateContainer within sandbox \"3aa7117f66795dbd0b18941838f1ee029f69bb52d1c5e9300ea8d33f6f395daa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8dd4da86cdae7c958de8aa453b40157381680aaaeec85c1aa106ae7ed550e7f2\"" Feb 8 23:23:11.458627 env[1196]: time="2024-02-08T23:23:11.454886084Z" level=info msg="StartContainer for 
\"8dd4da86cdae7c958de8aa453b40157381680aaaeec85c1aa106ae7ed550e7f2\"" Feb 8 23:23:11.544899 env[1196]: time="2024-02-08T23:23:11.544615699Z" level=info msg="StartContainer for \"8dd4da86cdae7c958de8aa453b40157381680aaaeec85c1aa106ae7ed550e7f2\" returns successfully" Feb 8 23:23:11.764274 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 8 23:23:11.764400 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 8 23:23:11.768516 kubelet[2143]: E0208 23:23:11.768484 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:11.799029 kubelet[2143]: I0208 23:23:11.798140 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zgc8j" podStartSLOduration=-9.223371998056679e+09 pod.CreationTimestamp="2024-02-08 23:22:33 +0000 UTC" firstStartedPulling="2024-02-08 23:22:34.041008102 +0000 UTC m=+21.554427355" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:11.797590605 +0000 UTC m=+59.311009858" watchObservedRunningTime="2024-02-08 23:23:11.798096913 +0000 UTC m=+59.311516156" Feb 8 23:23:12.366592 systemd[1]: run-containerd-runc-k8s.io-8dd4da86cdae7c958de8aa453b40157381680aaaeec85c1aa106ae7ed550e7f2-runc.lX7YGr.mount: Deactivated successfully. Feb 8 23:23:12.773684 kubelet[2143]: E0208 23:23:12.773607 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:12.842112 systemd[1]: run-containerd-runc-k8s.io-8dd4da86cdae7c958de8aa453b40157381680aaaeec85c1aa106ae7ed550e7f2-runc.NxR3Ro.mount: Deactivated successfully. 
Feb 8 23:23:13.111000 audit[3603]: AVC avc: denied { write } for pid=3603 comm="tee" name="fd" dev="proc" ino=25955 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.111000 audit[3603]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc453b698c a2=241 a3=1b6 items=1 ppid=3562 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.111000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 8 23:23:13.111000 audit: PATH item=0 name="/dev/fd/63" inode=25946 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.118000 audit[3596]: AVC avc: denied { write } for pid=3596 comm="tee" name="fd" dev="proc" ino=25094 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.118000 audit[3596]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb320d97b a2=241 a3=1b6 items=1 ppid=3548 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.118000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 8 23:23:13.118000 audit: PATH item=0 name="/dev/fd/63" inode=25937 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.118000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.136000 audit[3611]: AVC avc: denied { write } for pid=3611 comm="tee" name="fd" dev="proc" ino=26839 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.136000 audit[3611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2803698b a2=241 a3=1b6 items=1 ppid=3549 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.136000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 8 23:23:13.136000 audit: PATH item=0 name="/dev/fd/63" inode=25957 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.143000 audit[3613]: AVC avc: denied { write } for pid=3613 comm="tee" name="fd" dev="proc" ino=26843 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.143000 audit[3613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8038398b a2=241 a3=1b6 items=1 ppid=3557 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.143000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 8 23:23:13.143000 audit: PATH item=0 name="/dev/fd/63" inode=25958 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.149000 audit[3622]: AVC avc: denied { write } for pid=3622 comm="tee" name="fd" dev="proc" ino=27798 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.149000 audit[3622]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1e09597c a2=241 a3=1b6 items=1 ppid=3554 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.149000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 8 23:23:13.149000 audit: PATH item=0 name="/dev/fd/63" inode=27786 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.158000 audit[3629]: AVC avc: denied { write } for pid=3629 comm="tee" name="fd" dev="proc" ino=25102 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.158000 audit[3629]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2995c98b a2=241 a3=1b6 items=1 ppid=3582 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.158000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 8 23:23:13.158000 audit: PATH item=0 name="/dev/fd/63" inode=25099 dev=00:0c 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.160000 audit[3625]: AVC avc: denied { write } for pid=3625 comm="tee" name="fd" dev="proc" ino=27802 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:23:13.160000 audit[3625]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffffd8e698d a2=241 a3=1b6 items=1 ppid=3546 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.160000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 8 23:23:13.160000 audit: PATH item=0 name="/dev/fd/63" inode=25967 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:13.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit: BPF prog-id=10 op=LOAD Feb 8 23:23:13.390000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe035f04a0 a2=70 a3=7fc5d45d7000 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.390000 audit: BPF prog-id=10 op=UNLOAD 
Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.390000 audit: BPF prog-id=11 op=LOAD Feb 8 23:23:13.390000 
audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe035f04a0 a2=70 a3=6e items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.391000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe035f0450 a2=70 a3=7ffe035f04a0 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.391000 audit: BPF prog-id=12 op=LOAD Feb 8 23:23:13.391000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe035f0430 a2=70 a3=7ffe035f04a0 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.391000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.392000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:23:13.392000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.392000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe035f0510 a2=70 a3=0 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.392000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.392000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe035f0500 a2=70 a3=0 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.392000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.392000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe035f0540 a2=70 a3=0 items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 
audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { perfmon } for pid=3689 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit[3689]: AVC avc: denied { bpf } for pid=3689 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.393000 audit: BPF prog-id=13 op=LOAD Feb 8 23:23:13.393000 audit[3689]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe035f0460 a2=70 a3=ffffffff items=0 ppid=3552 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.393000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:23:13.399000 audit[3693]: AVC avc: denied { bpf } for pid=3693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.399000 audit[3693]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff6b86ff90 a2=70 a3=208 items=0 ppid=3552 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:23:13.399000 audit[3693]: AVC avc: denied { bpf } for pid=3693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:23:13.399000 audit[3693]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff6b86fe60 a2=70 a3=3 items=0 ppid=3552 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:23:13.410000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:23:13.463000 audit[3721]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:13.463000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fffee617020 a2=0 a3=7fffee61700c items=0 ppid=3552 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:13.465000 audit[3720]: NETFILTER_CFG table=raw:112 family=2 entries=19 op=nft_register_chain pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" 
Feb 8 23:23:13.465000 audit[3720]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc5f00a740 a2=0 a3=558fd05e0000 items=0 ppid=3552 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:13.469000 audit[3724]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:13.469000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffee3bc4510 a2=0 a3=555a257f8000 items=0 ppid=3552 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.469000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:13.471000 audit[3722]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:13.471000 audit[3722]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffe5bf7a840 a2=0 a3=56140da14000 items=0 ppid=3552 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:13.471000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:14.253167 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:42534.service. Feb 8 23:23:14.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.76:22-10.0.0.1:42534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:14.254032 kernel: kauditd_printk_skb: 119 callbacks suppressed Feb 8 23:23:14.254103 kernel: audit: type=1130 audit(1707434594.252:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.76:22-10.0.0.1:42534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:14.288465 systemd-networkd[1070]: vxlan.calico: Link UP Feb 8 23:23:14.288472 systemd-networkd[1070]: vxlan.calico: Gained carrier Feb 8 23:23:14.289000 audit[3730]: USER_ACCT pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.290198 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:14.290790 kernel: audit: type=1101 audit(1707434594.289:350): pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.290000 audit[3730]: CRED_ACQ pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.290000 audit[3730]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc20e5720 a2=3 a3=0 items=0 ppid=1 pid=3730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:14.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:14.291287 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:14.291937 kernel: audit: type=1103 audit(1707434594.290:351): pid=3730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.291984 kernel: audit: type=1006 audit(1707434594.290:352): pid=3730 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 8 23:23:14.292007 kernel: audit: type=1300 audit(1707434594.290:352): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc20e5720 a2=3 a3=0 items=0 ppid=1 pid=3730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:14.292030 kernel: audit: type=1327 audit(1707434594.290:352): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:14.296608 systemd[1]: Started session-13.scope. Feb 8 23:23:14.296999 systemd-logind[1177]: New session 13 of user core. 
Feb 8 23:23:14.303000 audit[3730]: USER_START pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.303000 audit[3735]: CRED_ACQ pid=3735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.313782 kernel: audit: type=1105 audit(1707434594.303:353): pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.313822 kernel: audit: type=1103 audit(1707434594.303:354): pid=3735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.437334 sshd[3730]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:14.437000 audit[3730]: USER_END pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.439155 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:42534.service: Deactivated successfully. Feb 8 23:23:14.440325 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:23:14.440365 systemd-logind[1177]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:23:14.441567 systemd-logind[1177]: Removed session 13. 
Feb 8 23:23:14.437000 audit[3730]: CRED_DISP pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.444415 kernel: audit: type=1106 audit(1707434594.437:355): pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.444473 kernel: audit: type=1104 audit(1707434594.437:356): pid=3730 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:14.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.76:22-10.0.0.1:42534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:16.221967 systemd-networkd[1070]: vxlan.calico: Gained IPv6LL Feb 8 23:23:16.624946 env[1196]: time="2024-02-08T23:23:16.624898032Z" level=info msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\"" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.713 [INFO][3766] k8s.go 578: Cleaning up netns ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.713 [INFO][3766] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" iface="eth0" netns="/var/run/netns/cni-530a0f41-7cc3-cdaa-c580-65450bd34878" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.713 [INFO][3766] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" iface="eth0" netns="/var/run/netns/cni-530a0f41-7cc3-cdaa-c580-65450bd34878" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.714 [INFO][3766] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" iface="eth0" netns="/var/run/netns/cni-530a0f41-7cc3-cdaa-c580-65450bd34878" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.714 [INFO][3766] k8s.go 585: Releasing IP address(es) ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.714 [INFO][3766] utils.go 188: Calico CNI releasing IP address ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.751 [INFO][3774] ipam_plugin.go 415: Releasing address using handleID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.751 [INFO][3774] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.751 [INFO][3774] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.758 [WARNING][3774] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.758 [INFO][3774] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.759 [INFO][3774] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:16.762911 env[1196]: 2024-02-08 23:23:16.761 [INFO][3766] k8s.go 591: Teardown processing complete. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Feb 8 23:23:16.763415 env[1196]: time="2024-02-08T23:23:16.763094893Z" level=info msg="TearDown network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" successfully" Feb 8 23:23:16.763415 env[1196]: time="2024-02-08T23:23:16.763173133Z" level=info msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" returns successfully" Feb 8 23:23:16.764075 env[1196]: time="2024-02-08T23:23:16.764030859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96d6f8c9-hs9cp,Uid:e684c1fb-85af-423e-8c6b-15288ce2126a,Namespace:calico-system,Attempt:1,}" Feb 8 23:23:16.764902 systemd[1]: run-netns-cni\x2d530a0f41\x2d7cc3\x2dcdaa\x2dc580\x2d65450bd34878.mount: Deactivated successfully. 
Feb 8 23:23:16.891137 systemd-networkd[1070]: cali7f68d1edd6b: Link UP Feb 8 23:23:16.893400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:23:16.893478 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7f68d1edd6b: link becomes ready Feb 8 23:23:16.893850 systemd-networkd[1070]: cali7f68d1edd6b: Gained carrier Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.818 [INFO][3781] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0 calico-kube-controllers-c96d6f8c9- calico-system e684c1fb-85af-423e-8c6b-15288ce2126a 826 0 2024-02-08 23:22:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c96d6f8c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c96d6f8c9-hs9cp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f68d1edd6b [] []}} ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.818 [INFO][3781] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.850 [INFO][3796] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" 
HandleID="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.864 [INFO][3796] ipam_plugin.go 268: Auto assigning IP ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" HandleID="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000695df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c96d6f8c9-hs9cp", "timestamp":"2024-02-08 23:23:16.850984644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.864 [INFO][3796] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.864 [INFO][3796] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.864 [INFO][3796] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.866 [INFO][3796] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.870 [INFO][3796] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.874 [INFO][3796] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.875 [INFO][3796] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.877 [INFO][3796] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.877 [INFO][3796] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.879 [INFO][3796] ipam.go 1682: Creating new handle: k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0 Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.882 [INFO][3796] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.886 [INFO][3796] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.886 [INFO][3796] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" host="localhost" Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.886 [INFO][3796] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:17.058458 env[1196]: 2024-02-08 23:23:16.886 [INFO][3796] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" HandleID="k8s-pod-network.06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:16.888 [INFO][3781] k8s.go 385: Populated endpoint ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0", GenerateName:"calico-kube-controllers-c96d6f8c9-", Namespace:"calico-system", SelfLink:"", UID:"e684c1fb-85af-423e-8c6b-15288ce2126a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96d6f8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c96d6f8c9-hs9cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f68d1edd6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:16.889 [INFO][3781] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:16.889 [INFO][3781] dataplane_linux.go 68: Setting the host side veth name to cali7f68d1edd6b ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:16.894 [INFO][3781] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:16.894 [INFO][3781] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0", GenerateName:"calico-kube-controllers-c96d6f8c9-", Namespace:"calico-system", SelfLink:"", UID:"e684c1fb-85af-423e-8c6b-15288ce2126a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96d6f8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0", Pod:"calico-kube-controllers-c96d6f8c9-hs9cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f68d1edd6b", MAC:"3a:b3:9b:34:4c:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:17.059085 env[1196]: 2024-02-08 23:23:17.056 [INFO][3781] k8s.go 491: Wrote updated endpoint to datastore ContainerID="06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0" Namespace="calico-system" Pod="calico-kube-controllers-c96d6f8c9-hs9cp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0" Feb 8 23:23:17.068000 audit[3819]: NETFILTER_CFG table=filter:115 family=2 entries=36 
op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:17.068000 audit[3819]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff30400070 a2=0 a3=7fff3040005c items=0 ppid=3552 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:17.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:17.075928 env[1196]: time="2024-02-08T23:23:17.075859232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:17.075928 env[1196]: time="2024-02-08T23:23:17.075901733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:17.075928 env[1196]: time="2024-02-08T23:23:17.075912133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:17.076150 env[1196]: time="2024-02-08T23:23:17.076041099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0 pid=3826 runtime=io.containerd.runc.v2 Feb 8 23:23:17.102801 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:23:17.128209 env[1196]: time="2024-02-08T23:23:17.128150394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96d6f8c9-hs9cp,Uid:e684c1fb-85af-423e-8c6b-15288ce2126a,Namespace:calico-system,Attempt:1,} returns sandbox id \"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0\"" Feb 8 23:23:17.129631 env[1196]: time="2024-02-08T23:23:17.129603637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 8 23:23:17.765042 systemd[1]: run-containerd-runc-k8s.io-06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0-runc.DSyRjO.mount: Deactivated successfully. Feb 8 23:23:18.077905 systemd-networkd[1070]: cali7f68d1edd6b: Gained IPv6LL Feb 8 23:23:19.440654 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:45324.service. Feb 8 23:23:19.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.76:22-10.0.0.1:45324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:19.444784 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 8 23:23:19.444844 kernel: audit: type=1130 audit(1707434599.440:359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.76:22-10.0.0.1:45324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:19.475000 audit[3868]: USER_ACCT pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.476096 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:19.478000 audit[3868]: CRED_ACQ pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.479838 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:19.482436 kernel: audit: type=1101 audit(1707434599.475:360): pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.482495 kernel: audit: type=1103 audit(1707434599.478:361): pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.484829 kernel: audit: type=1006 audit(1707434599.478:362): pid=3868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 8 23:23:19.488534 kernel: audit: type=1300 audit(1707434599.478:362): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcff4a1310 a2=3 a3=0 items=0 ppid=1 pid=3868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:23:19.478000 audit[3868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcff4a1310 a2=3 a3=0 items=0 ppid=1 pid=3868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:19.485731 systemd-logind[1177]: New session 14 of user core. Feb 8 23:23:19.485860 systemd[1]: Started session-14.scope. Feb 8 23:23:19.478000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:19.491952 kernel: audit: type=1327 audit(1707434599.478:362): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:19.491000 audit[3868]: USER_START pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.492000 audit[3871]: CRED_ACQ pid=3871 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.498553 kernel: audit: type=1105 audit(1707434599.491:363): pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.498605 kernel: audit: type=1103 audit(1707434599.492:364): pid=3871 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.624085 env[1196]: time="2024-02-08T23:23:19.624022108Z" level=info msg="StopPodSandbox for 
\"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\"" Feb 8 23:23:19.624085 env[1196]: time="2024-02-08T23:23:19.624022198Z" level=info msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\"" Feb 8 23:23:19.669688 sshd[3868]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:19.676883 kernel: audit: type=1106 audit(1707434599.670:365): pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.677019 kernel: audit: type=1104 audit(1707434599.670:366): pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.670000 audit[3868]: USER_END pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.670000 audit[3868]: CRED_DISP pid=3868 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.672158 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:45338.service. Feb 8 23:23:19.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.76:22-10.0.0.1:45338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:19.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.76:22-10.0.0.1:45324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:19.672657 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:45324.service: Deactivated successfully. Feb 8 23:23:19.673847 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:23:19.674065 systemd-logind[1177]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:23:19.676206 systemd-logind[1177]: Removed session 14. Feb 8 23:23:19.714000 audit[3931]: USER_ACCT pid=3931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.716209 sshd[3931]: Accepted publickey for core from 10.0.0.1 port 45338 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:19.715000 audit[3931]: CRED_ACQ pid=3931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.715000 audit[3931]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7af3bf20 a2=3 a3=0 items=0 ppid=1 pid=3931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:19.715000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:19.716558 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:19.719736 systemd-logind[1177]: New session 15 of user core. Feb 8 23:23:19.720564 systemd[1]: Started session-15.scope. 
Feb 8 23:23:19.724000 audit[3931]: USER_START pid=3931 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.725000 audit[3943]: CRED_ACQ pid=3943 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.684 [INFO][3916] k8s.go 578: Cleaning up netns ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.685 [INFO][3916] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" iface="eth0" netns="/var/run/netns/cni-15ad4e02-49d6-74eb-1849-b2908badf394" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.685 [INFO][3916] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" iface="eth0" netns="/var/run/netns/cni-15ad4e02-49d6-74eb-1849-b2908badf394" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.685 [INFO][3916] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" iface="eth0" netns="/var/run/netns/cni-15ad4e02-49d6-74eb-1849-b2908badf394" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.685 [INFO][3916] k8s.go 585: Releasing IP address(es) ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.685 [INFO][3916] utils.go 188: Calico CNI releasing IP address ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.702 [INFO][3934] ipam_plugin.go 415: Releasing address using handleID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.702 [INFO][3934] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.702 [INFO][3934] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.742 [WARNING][3934] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.742 [INFO][3934] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.743 [INFO][3934] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:23:19.747369 env[1196]: 2024-02-08 23:23:19.746 [INFO][3916] k8s.go 591: Teardown processing complete. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Feb 8 23:23:19.749412 systemd[1]: run-netns-cni\x2d15ad4e02\x2d49d6\x2d74eb\x2d1849\x2db2908badf394.mount: Deactivated successfully. Feb 8 23:23:19.750371 env[1196]: time="2024-02-08T23:23:19.750334283Z" level=info msg="TearDown network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" successfully" Feb 8 23:23:19.750421 env[1196]: time="2024-02-08T23:23:19.750370212Z" level=info msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" returns successfully" Feb 8 23:23:19.750639 kubelet[2143]: E0208 23:23:19.750622 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:19.751050 env[1196]: time="2024-02-08T23:23:19.750931743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9kwz,Uid:04f8b771-04a3-4156-98db-f84147d5ca2e,Namespace:kube-system,Attempt:1,}" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.742 [INFO][3915] k8s.go 578: Cleaning up netns ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.743 [INFO][3915] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" iface="eth0" netns="/var/run/netns/cni-796e6e26-49f9-e302-f6f3-723f49e3f58e" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.743 [INFO][3915] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" iface="eth0" netns="/var/run/netns/cni-796e6e26-49f9-e302-f6f3-723f49e3f58e" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.743 [INFO][3915] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" iface="eth0" netns="/var/run/netns/cni-796e6e26-49f9-e302-f6f3-723f49e3f58e" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.743 [INFO][3915] k8s.go 585: Releasing IP address(es) ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.743 [INFO][3915] utils.go 188: Calico CNI releasing IP address ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.770 [INFO][3944] ipam_plugin.go 415: Releasing address using handleID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.770 [INFO][3944] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.770 [INFO][3944] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.777 [WARNING][3944] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.777 [INFO][3944] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.778 [INFO][3944] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:19.789945 env[1196]: 2024-02-08 23:23:19.787 [INFO][3915] k8s.go 591: Teardown processing complete. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:23:19.790839 env[1196]: time="2024-02-08T23:23:19.790810539Z" level=info msg="TearDown network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" successfully" Feb 8 23:23:19.790948 env[1196]: time="2024-02-08T23:23:19.790928393Z" level=info msg="StopPodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" returns successfully" Feb 8 23:23:19.791399 kubelet[2143]: E0208 23:23:19.791252 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:19.791653 env[1196]: time="2024-02-08T23:23:19.791634680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bvd5v,Uid:53d36018-7aad-443b-be14-946096d7c23e,Namespace:kube-system,Attempt:1,}" Feb 8 23:23:19.846272 systemd[1]: run-netns-cni\x2d796e6e26\x2d49f9\x2de302\x2df6f3\x2d723f49e3f58e.mount: Deactivated successfully. 
Feb 8 23:23:19.917733 systemd-networkd[1070]: calie5b500c48d1: Link UP Feb 8 23:23:19.921153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:23:19.921221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie5b500c48d1: link becomes ready Feb 8 23:23:19.921008 systemd-networkd[1070]: calie5b500c48d1: Gained carrier Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.843 [INFO][3958] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--v9kwz-eth0 coredns-787d4945fb- kube-system 04f8b771-04a3-4156-98db-f84147d5ca2e 844 0 2024-02-08 23:22:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-v9kwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5b500c48d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.844 [INFO][3958] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.875 [INFO][3989] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" HandleID="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.886 [INFO][3989] ipam_plugin.go 268: Auto assigning IP 
ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" HandleID="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d6260), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-v9kwz", "timestamp":"2024-02-08 23:23:19.875522457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.888 [INFO][3989] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.888 [INFO][3989] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.888 [INFO][3989] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.889 [INFO][3989] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.891 [INFO][3989] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.894 [INFO][3989] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.896 [INFO][3989] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.898 [INFO][3989] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.898 [INFO][3989] ipam.go 1180: Attempting 
to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.900 [INFO][3989] ipam.go 1682: Creating new handle: k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405 Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.902 [INFO][3989] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.906 [INFO][3989] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.906 [INFO][3989] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" host="localhost" Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.906 [INFO][3989] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:23:19.939558 env[1196]: 2024-02-08 23:23:19.906 [INFO][3989] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" HandleID="k8s-pod-network.603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.908 [INFO][3958] k8s.go 385: Populated endpoint ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--v9kwz-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"04f8b771-04a3-4156-98db-f84147d5ca2e", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-v9kwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5b500c48d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.908 [INFO][3958] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.908 [INFO][3958] dataplane_linux.go 68: Setting the host side veth name to calie5b500c48d1 ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.921 [INFO][3958] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.922 [INFO][3958] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--v9kwz-eth0", GenerateName:"coredns-787d4945fb-", 
Namespace:"kube-system", SelfLink:"", UID:"04f8b771-04a3-4156-98db-f84147d5ca2e", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405", Pod:"coredns-787d4945fb-v9kwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5b500c48d1", MAC:"a2:2f:b5:a1:8c:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:19.940438 env[1196]: 2024-02-08 23:23:19.936 [INFO][3958] k8s.go 491: Wrote updated endpoint to datastore ContainerID="603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405" Namespace="kube-system" Pod="coredns-787d4945fb-v9kwz" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--v9kwz-eth0" Feb 8 23:23:19.952000 audit[4017]: NETFILTER_CFG table=filter:116 family=2 entries=40 
op=nft_register_chain pid=4017 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:19.952000 audit[4017]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7fff50c59e10 a2=0 a3=7fff50c59dfc items=0 ppid=3552 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:19.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:19.978043 env[1196]: time="2024-02-08T23:23:19.977814028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:19.978043 env[1196]: time="2024-02-08T23:23:19.977848574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:19.978043 env[1196]: time="2024-02-08T23:23:19.977863143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:19.978446 env[1196]: time="2024-02-08T23:23:19.978380519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405 pid=4034 runtime=io.containerd.runc.v2 Feb 8 23:23:19.988154 systemd-networkd[1070]: cali4355ef0074f: Link UP Feb 8 23:23:19.991068 systemd-networkd[1070]: cali4355ef0074f: Gained carrier Feb 8 23:23:19.991920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4355ef0074f: link becomes ready Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.843 [INFO][3972] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--bvd5v-eth0 coredns-787d4945fb- kube-system 53d36018-7aad-443b-be14-946096d7c23e 845 0 2024-02-08 23:22:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-bvd5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4355ef0074f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.843 [INFO][3972] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.911 [INFO][3994] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" 
HandleID="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.930 [INFO][3994] ipam_plugin.go 268: Auto assigning IP ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" HandleID="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004dbcd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-bvd5v", "timestamp":"2024-02-08 23:23:19.911795539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.930 [INFO][3994] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.930 [INFO][3994] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.930 [INFO][3994] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.934 [INFO][3994] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.938 [INFO][3994] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.958 [INFO][3994] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.960 [INFO][3994] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.965 [INFO][3994] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.965 [INFO][3994] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.967 [INFO][3994] ipam.go 1682: Creating new handle: k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.969 [INFO][3994] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.973 [INFO][3994] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.973 [INFO][3994] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" host="localhost" Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.973 [INFO][3994] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:20.006689 env[1196]: 2024-02-08 23:23:19.973 [INFO][3994] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" HandleID="k8s-pod-network.43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:19.974 [INFO][3972] k8s.go 385: Populated endpoint ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--bvd5v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"53d36018-7aad-443b-be14-946096d7c23e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-bvd5v", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4355ef0074f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:19.974 [INFO][3972] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:19.975 [INFO][3972] dataplane_linux.go 68: Setting the host side veth name to cali4355ef0074f ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:19.991 [INFO][3972] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:19.992 [INFO][3972] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" 
WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--bvd5v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"53d36018-7aad-443b-be14-946096d7c23e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c", Pod:"coredns-787d4945fb-bvd5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4355ef0074f", MAC:"b2:9b:72:0e:cc:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:20.007273 env[1196]: 2024-02-08 23:23:20.003 [INFO][3972] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c" Namespace="kube-system" Pod="coredns-787d4945fb-bvd5v" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:23:20.011598 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:23:20.015000 audit[4068]: NETFILTER_CFG table=filter:117 family=2 entries=34 op=nft_register_chain pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:20.015000 audit[4068]: SYSCALL arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7ffda6a05d40 a2=0 a3=7ffda6a05d2c items=0 ppid=3552 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:20.015000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:20.033715 env[1196]: time="2024-02-08T23:23:20.033678765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v9kwz,Uid:04f8b771-04a3-4156-98db-f84147d5ca2e,Namespace:kube-system,Attempt:1,} returns sandbox id \"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405\"" Feb 8 23:23:20.039788 kubelet[2143]: E0208 23:23:20.034429 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:20.041712 env[1196]: time="2024-02-08T23:23:20.035948615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:20.041712 env[1196]: time="2024-02-08T23:23:20.035978461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:20.041712 env[1196]: time="2024-02-08T23:23:20.035987508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:20.041712 env[1196]: time="2024-02-08T23:23:20.036121933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c pid=4089 runtime=io.containerd.runc.v2 Feb 8 23:23:20.052219 env[1196]: time="2024-02-08T23:23:20.052186541Z" level=info msg="CreateContainer within sandbox \"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:23:20.071314 env[1196]: time="2024-02-08T23:23:20.071265588Z" level=info msg="CreateContainer within sandbox \"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb99c82b6339beda3bc148db2b873bc90a3a2748e45b412a4669e92cefc48880\"" Feb 8 23:23:20.073532 env[1196]: time="2024-02-08T23:23:20.071654405Z" level=info msg="StartContainer for \"bb99c82b6339beda3bc148db2b873bc90a3a2748e45b412a4669e92cefc48880\"" Feb 8 23:23:20.090922 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:23:20.129259 env[1196]: time="2024-02-08T23:23:20.129201705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bvd5v,Uid:53d36018-7aad-443b-be14-946096d7c23e,Namespace:kube-system,Attempt:1,} returns sandbox id \"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c\"" Feb 8 23:23:20.129780 kubelet[2143]: E0208 23:23:20.129751 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 
23:23:20.135056 env[1196]: time="2024-02-08T23:23:20.135013715Z" level=info msg="CreateContainer within sandbox \"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:23:20.158191 env[1196]: time="2024-02-08T23:23:20.158151088Z" level=info msg="StartContainer for \"bb99c82b6339beda3bc148db2b873bc90a3a2748e45b412a4669e92cefc48880\" returns successfully" Feb 8 23:23:20.164772 env[1196]: time="2024-02-08T23:23:20.164738136Z" level=info msg="CreateContainer within sandbox \"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e267a35e70c95d01a792b426d8a3686fa4208871135704899dd13b08fab4fc9\"" Feb 8 23:23:20.166220 env[1196]: time="2024-02-08T23:23:20.166196307Z" level=info msg="StartContainer for \"0e267a35e70c95d01a792b426d8a3686fa4208871135704899dd13b08fab4fc9\"" Feb 8 23:23:20.219882 env[1196]: time="2024-02-08T23:23:20.219823373Z" level=info msg="StartContainer for \"0e267a35e70c95d01a792b426d8a3686fa4208871135704899dd13b08fab4fc9\" returns successfully" Feb 8 23:23:20.602181 env[1196]: time="2024-02-08T23:23:20.602128308Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:20.603739 env[1196]: time="2024-02-08T23:23:20.603715214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:20.611815 env[1196]: time="2024-02-08T23:23:20.611794877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:20.613466 env[1196]: time="2024-02-08T23:23:20.613428011Z" 
level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:20.614116 env[1196]: time="2024-02-08T23:23:20.614089513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 8 23:23:20.622022 env[1196]: time="2024-02-08T23:23:20.621992342Z" level=info msg="CreateContainer within sandbox \"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 8 23:23:20.624642 env[1196]: time="2024-02-08T23:23:20.624625169Z" level=info msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" Feb 8 23:23:20.637439 env[1196]: time="2024-02-08T23:23:20.637396389Z" level=info msg="CreateContainer within sandbox \"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9de929b19ac122eb246996f64071b38405fcb01c35b3918e3b439c8a9ef2af24\"" Feb 8 23:23:20.637899 env[1196]: time="2024-02-08T23:23:20.637882861Z" level=info msg="StartContainer for \"9de929b19ac122eb246996f64071b38405fcb01c35b3918e3b439c8a9ef2af24\"" Feb 8 23:23:20.659975 sshd[3931]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:20.660633 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:45350.service. Feb 8 23:23:20.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.76:22-10.0.0.1:45350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:20.676000 audit[3931]: USER_END pid=3931 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.676000 audit[3931]: CRED_DISP pid=3931 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.76:22-10.0.0.1:45338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:20.678592 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:45338.service: Deactivated successfully. Feb 8 23:23:20.679379 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:23:20.680782 systemd-logind[1177]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:23:20.681675 systemd-logind[1177]: Removed session 15. 
Feb 8 23:23:20.707000 audit[4241]: USER_ACCT pid=4241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.708090 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 45350 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:20.707000 audit[4241]: CRED_ACQ pid=4241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.708000 audit[4241]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaf295470 a2=3 a3=0 items=0 ppid=1 pid=4241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:20.708000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:20.709181 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:20.710917 env[1196]: time="2024-02-08T23:23:20.710879624Z" level=info msg="StartContainer for \"9de929b19ac122eb246996f64071b38405fcb01c35b3918e3b439c8a9ef2af24\" returns successfully" Feb 8 23:23:20.713355 systemd[1]: Started session-16.scope. Feb 8 23:23:20.713629 systemd-logind[1177]: New session 16 of user core. 
Feb 8 23:23:20.717000 audit[4241]: USER_START pid=4241 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.718000 audit[4271]: CRED_ACQ pid=4271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.701 [INFO][4225] k8s.go 578: Cleaning up netns ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.701 [INFO][4225] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" iface="eth0" netns="/var/run/netns/cni-f70363ba-e2ab-1d81-6288-fc1de87a5a89" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.701 [INFO][4225] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" iface="eth0" netns="/var/run/netns/cni-f70363ba-e2ab-1d81-6288-fc1de87a5a89" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.702 [INFO][4225] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" iface="eth0" netns="/var/run/netns/cni-f70363ba-e2ab-1d81-6288-fc1de87a5a89" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.702 [INFO][4225] k8s.go 585: Releasing IP address(es) ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.702 [INFO][4225] utils.go 188: Calico CNI releasing IP address ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.717 [INFO][4259] ipam_plugin.go 415: Releasing address using handleID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.718 [INFO][4259] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.718 [INFO][4259] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.724 [WARNING][4259] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.724 [INFO][4259] ipam_plugin.go 443: Releasing address using workloadID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.725 [INFO][4259] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:23:20.728157 env[1196]: 2024-02-08 23:23:20.726 [INFO][4225] k8s.go 591: Teardown processing complete. ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:23:20.728507 env[1196]: time="2024-02-08T23:23:20.728280824Z" level=info msg="TearDown network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" successfully" Feb 8 23:23:20.728507 env[1196]: time="2024-02-08T23:23:20.728314929Z" level=info msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" returns successfully" Feb 8 23:23:20.728883 env[1196]: time="2024-02-08T23:23:20.728855973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k779c,Uid:2f044ac1-9cb8-43bc-bcbe-22f291a59d64,Namespace:calico-system,Attempt:1,}" Feb 8 23:23:20.795882 kubelet[2143]: E0208 23:23:20.795505 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:20.800230 kubelet[2143]: E0208 23:23:20.800207 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:20.820362 kubelet[2143]: I0208 23:23:20.818833 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bvd5v" podStartSLOduration=55.818755603 pod.CreationTimestamp="2024-02-08 23:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:20.804710049 +0000 UTC m=+68.318129302" watchObservedRunningTime="2024-02-08 23:23:20.818755603 +0000 UTC m=+68.332174846" Feb 8 23:23:20.844332 sshd[4241]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:20.848000 audit[4241]: USER_END pid=4241 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.848000 audit[4241]: CRED_DISP pid=4241 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:20.852156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860577954.mount: Deactivated successfully. Feb 8 23:23:20.852281 systemd[1]: run-netns-cni\x2df70363ba\x2de2ab\x2d1d81\x2d6288\x2dfc1de87a5a89.mount: Deactivated successfully. Feb 8 23:23:20.855555 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:45350.service: Deactivated successfully. Feb 8 23:23:20.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.76:22-10.0.0.1:45350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:20.857101 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:23:20.858417 systemd-logind[1177]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:23:20.861208 systemd-logind[1177]: Removed session 16. 
Feb 8 23:23:20.861533 kubelet[2143]: I0208 23:23:20.861514 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-v9kwz" podStartSLOduration=55.861481459 pod.CreationTimestamp="2024-02-08 23:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:20.844076954 +0000 UTC m=+68.357496207" watchObservedRunningTime="2024-02-08 23:23:20.861481459 +0000 UTC m=+68.374900712" Feb 8 23:23:20.907000 audit[4346]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=4346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:20.907000 audit[4346]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc8fec6fd0 a2=0 a3=7ffc8fec6fbc items=0 ppid=2312 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:20.909295 systemd-networkd[1070]: cali38834df9488: Link UP Feb 8 23:23:20.911819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali38834df9488: link becomes ready Feb 8 23:23:20.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:20.912000 audit[4346]: NETFILTER_CFG table=nat:119 family=2 entries=30 op=nft_register_rule pid=4346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:20.912000 audit[4346]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc8fec6fd0 a2=0 a3=7ffc8fec6fbc items=0 ppid=2312 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:20.912000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:20.916077 systemd-networkd[1070]: cali38834df9488: Gained carrier Feb 8 23:23:20.918367 kubelet[2143]: I0208 23:23:20.917805 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c96d6f8c9-hs9cp" podStartSLOduration=-9.223371988937021e+09 pod.CreationTimestamp="2024-02-08 23:22:33 +0000 UTC" firstStartedPulling="2024-02-08 23:23:17.129144602 +0000 UTC m=+64.642563855" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:20.874859348 +0000 UTC m=+68.388278601" watchObservedRunningTime="2024-02-08 23:23:20.917754276 +0000 UTC m=+68.431173529" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.775 [INFO][4273] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k779c-eth0 csi-node-driver- calico-system 2f044ac1-9cb8-43bc-bcbe-22f291a59d64 882 0 2024-02-08 23:22:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-k779c eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali38834df9488 [] []}} ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.775 [INFO][4273] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.864 [INFO][4294] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" HandleID="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.878 [INFO][4294] ipam_plugin.go 268: Auto assigning IP ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" HandleID="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Workload="localhost-k8s-csi--node--driver--k779c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a99f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k779c", "timestamp":"2024-02-08 23:23:20.86425856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.878 [INFO][4294] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.878 [INFO][4294] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.878 [INFO][4294] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.882 [INFO][4294] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.885 [INFO][4294] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.891 [INFO][4294] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.892 [INFO][4294] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.895 [INFO][4294] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.895 [INFO][4294] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.896 [INFO][4294] ipam.go 1682: Creating new handle: k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826 Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.899 [INFO][4294] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.903 [INFO][4294] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.903 [INFO][4294] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" host="localhost" Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.903 [INFO][4294] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:20.919908 env[1196]: 2024-02-08 23:23:20.903 [INFO][4294] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" HandleID="k8s-pod-network.742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.905 [INFO][4273] k8s.go 385: Populated endpoint ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k779c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f044ac1-9cb8-43bc-bcbe-22f291a59d64", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k779c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali38834df9488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.905 [INFO][4273] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.905 [INFO][4273] dataplane_linux.go 68: Setting the host side veth name to cali38834df9488 ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.910 [INFO][4273] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.910 [INFO][4273] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k779c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"2f044ac1-9cb8-43bc-bcbe-22f291a59d64", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826", Pod:"csi-node-driver-k779c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali38834df9488", MAC:"6a:07:4d:b9:c1:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:20.920437 env[1196]: 2024-02-08 23:23:20.917 [INFO][4273] k8s.go 491: Wrote updated endpoint to datastore ContainerID="742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826" Namespace="calico-system" Pod="csi-node-driver-k779c" WorkloadEndpoint="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:23:20.938000 audit[4379]: NETFILTER_CFG table=filter:120 family=2 entries=42 op=nft_register_chain pid=4379 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:20.938000 audit[4379]: SYSCALL arch=c000003e syscall=46 success=yes exit=20696 a0=3 a1=7fff36157e00 a2=0 a3=7fff36157dec items=0 ppid=3552 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:20.938000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:21.086911 systemd-networkd[1070]: calie5b500c48d1: Gained IPv6LL Feb 8 23:23:21.109000 audit[4396]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:21.109000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcc44381a0 a2=0 a3=7ffcc443818c items=0 ppid=2312 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:21.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:21.112689 env[1196]: time="2024-02-08T23:23:21.112595045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:21.112689 env[1196]: time="2024-02-08T23:23:21.112656532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:21.112689 env[1196]: time="2024-02-08T23:23:21.112667292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:21.113143 env[1196]: time="2024-02-08T23:23:21.113027113Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826 pid=4399 runtime=io.containerd.runc.v2 Feb 8 23:23:21.120000 audit[4396]: NETFILTER_CFG table=nat:122 family=2 entries=63 op=nft_register_chain pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:21.120000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcc44381a0 a2=0 a3=7ffcc443818c items=0 ppid=2312 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:21.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:21.145527 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:23:21.157231 env[1196]: time="2024-02-08T23:23:21.157185596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k779c,Uid:2f044ac1-9cb8-43bc-bcbe-22f291a59d64,Namespace:calico-system,Attempt:1,} returns sandbox id \"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826\"" Feb 8 23:23:21.159002 env[1196]: time="2024-02-08T23:23:21.158982530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 8 23:23:21.805965 kubelet[2143]: E0208 23:23:21.805938 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:21.806385 kubelet[2143]: E0208 23:23:21.806006 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:22.045907 systemd-networkd[1070]: cali4355ef0074f: Gained IPv6LL Feb 8 23:23:22.807319 kubelet[2143]: E0208 23:23:22.807285 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:22.808713 kubelet[2143]: E0208 23:23:22.808084 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:22.809241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863707037.mount: Deactivated successfully. Feb 8 23:23:22.813932 systemd-networkd[1070]: cali38834df9488: Gained IPv6LL Feb 8 23:23:23.519755 env[1196]: time="2024-02-08T23:23:23.519706843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:23.521733 env[1196]: time="2024-02-08T23:23:23.521711671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:23.523135 env[1196]: time="2024-02-08T23:23:23.523106464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:23.524359 env[1196]: time="2024-02-08T23:23:23.524335503Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:23.524818 env[1196]: time="2024-02-08T23:23:23.524796827Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 8 23:23:23.526073 env[1196]: time="2024-02-08T23:23:23.526049160Z" level=info msg="CreateContainer within sandbox \"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 8 23:23:23.537932 env[1196]: time="2024-02-08T23:23:23.537899585Z" level=info msg="CreateContainer within sandbox \"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"88761a6c92626fe632001b9b705305d4bc5aa0d04b3923f28f32a9f574fc41ce\"" Feb 8 23:23:23.538287 env[1196]: time="2024-02-08T23:23:23.538250841Z" level=info msg="StartContainer for \"88761a6c92626fe632001b9b705305d4bc5aa0d04b3923f28f32a9f574fc41ce\"" Feb 8 23:23:23.582978 env[1196]: time="2024-02-08T23:23:23.582932658Z" level=info msg="StartContainer for \"88761a6c92626fe632001b9b705305d4bc5aa0d04b3923f28f32a9f574fc41ce\" returns successfully" Feb 8 23:23:23.583993 env[1196]: time="2024-02-08T23:23:23.583964664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 8 23:23:25.322643 env[1196]: time="2024-02-08T23:23:25.322593961Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:25.324354 env[1196]: time="2024-02-08T23:23:25.324305095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:25.325703 env[1196]: time="2024-02-08T23:23:25.325674590Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:25.327009 env[1196]: time="2024-02-08T23:23:25.326978301Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:25.327376 env[1196]: time="2024-02-08T23:23:25.327343624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 8 23:23:25.328996 env[1196]: time="2024-02-08T23:23:25.328969305Z" level=info msg="CreateContainer within sandbox \"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 8 23:23:25.341318 env[1196]: time="2024-02-08T23:23:25.341268346Z" level=info msg="CreateContainer within sandbox \"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cf772696d71de69a7f270223d6a9c80b62de4949d50da824368530ec651876b7\"" Feb 8 23:23:25.341814 env[1196]: time="2024-02-08T23:23:25.341754828Z" level=info msg="StartContainer for \"cf772696d71de69a7f270223d6a9c80b62de4949d50da824368530ec651876b7\"" Feb 8 23:23:25.391438 env[1196]: time="2024-02-08T23:23:25.391374676Z" level=info msg="StartContainer for \"cf772696d71de69a7f270223d6a9c80b62de4949d50da824368530ec651876b7\" returns successfully" Feb 8 23:23:25.705357 kubelet[2143]: I0208 23:23:25.705318 2143 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 8 23:23:25.705870 kubelet[2143]: I0208 
23:23:25.705660 2143 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 8 23:23:25.823148 kubelet[2143]: I0208 23:23:25.823106 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-k779c" podStartSLOduration=-9.223371984031712e+09 pod.CreationTimestamp="2024-02-08 23:22:33 +0000 UTC" firstStartedPulling="2024-02-08 23:23:21.15851257 +0000 UTC m=+68.671931823" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:25.822773246 +0000 UTC m=+73.336192499" watchObservedRunningTime="2024-02-08 23:23:25.823064338 +0000 UTC m=+73.336483601" Feb 8 23:23:25.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.76:22-10.0.0.1:45358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:25.847234 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:45358.service. Feb 8 23:23:25.848059 kernel: kauditd_printk_skb: 44 callbacks suppressed Feb 8 23:23:25.848113 kernel: audit: type=1130 audit(1707434605.846:393): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.76:22-10.0.0.1:45358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:25.884000 audit[4514]: USER_ACCT pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.885361 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 45358 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:25.887000 audit[4514]: CRED_ACQ pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.888339 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:25.890456 kernel: audit: type=1101 audit(1707434605.884:394): pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.890512 kernel: audit: type=1103 audit(1707434605.887:395): pid=4514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.890536 kernel: audit: type=1006 audit(1707434605.887:396): pid=4514 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 8 23:23:25.887000 audit[4514]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb00068c0 a2=3 a3=0 items=0 ppid=1 pid=4514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:25.892628 
systemd-logind[1177]: New session 17 of user core. Feb 8 23:23:25.893827 systemd[1]: Started session-17.scope. Feb 8 23:23:25.894624 kernel: audit: type=1300 audit(1707434605.887:396): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb00068c0 a2=3 a3=0 items=0 ppid=1 pid=4514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:25.894666 kernel: audit: type=1327 audit(1707434605.887:396): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:25.887000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:25.897000 audit[4514]: USER_START pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.899000 audit[4517]: CRED_ACQ pid=4517 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.903983 kernel: audit: type=1105 audit(1707434605.897:397): pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.904030 kernel: audit: type=1103 audit(1707434605.899:398): pid=4517 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.994512 sshd[4514]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:25.994000 
audit[4514]: USER_END pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.996868 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:45358.service: Deactivated successfully. Feb 8 23:23:25.997822 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:23:25.994000 audit[4514]: CRED_DISP pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.999456 systemd-logind[1177]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:23:26.000276 systemd-logind[1177]: Removed session 17. Feb 8 23:23:26.000672 kernel: audit: type=1106 audit(1707434605.994:399): pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:26.000729 kernel: audit: type=1104 audit(1707434605.994:400): pid=4514 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:25.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.76:22-10.0.0.1:45358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:30.624346 kubelet[2143]: E0208 23:23:30.624304 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:30.997919 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:40698.service. Feb 8 23:23:30.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.76:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:30.998951 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:30.999000 kernel: audit: type=1130 audit(1707434610.997:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.76:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.030000 audit[4534]: USER_ACCT pid=4534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.031484 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 40698 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:31.032861 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:31.031000 audit[4534]: CRED_ACQ pid=4534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.036130 systemd-logind[1177]: New session 18 of user core. 
Feb 8 23:23:31.036571 kernel: audit: type=1101 audit(1707434611.030:403): pid=4534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.036612 kernel: audit: type=1103 audit(1707434611.031:404): pid=4534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.036636 kernel: audit: type=1006 audit(1707434611.031:405): pid=4534 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 8 23:23:31.036841 systemd[1]: Started session-18.scope. Feb 8 23:23:31.044303 kernel: audit: type=1300 audit(1707434611.031:405): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0f43e0d0 a2=3 a3=0 items=0 ppid=1 pid=4534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:31.044342 kernel: audit: type=1327 audit(1707434611.031:405): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:31.044356 kernel: audit: type=1105 audit(1707434611.040:406): pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.031000 audit[4534]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0f43e0d0 a2=3 a3=0 items=0 ppid=1 pid=4534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 8 23:23:31.047408 kernel: audit: type=1103 audit(1707434611.041:407): pid=4537 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.031000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:31.040000 audit[4534]: USER_START pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.041000 audit[4537]: CRED_ACQ pid=4537 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.134710 sshd[4534]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:31.134000 audit[4534]: USER_END pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.136961 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:40698.service: Deactivated successfully. Feb 8 23:23:31.137704 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:23:31.138607 systemd-logind[1177]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:23:31.139293 systemd-logind[1177]: Removed session 18. 
Feb 8 23:23:31.134000 audit[4534]: CRED_DISP pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.152215 kernel: audit: type=1106 audit(1707434611.134:408): pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.152329 kernel: audit: type=1104 audit(1707434611.134:409): pid=4534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:31.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.76:22-10.0.0.1:40698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.624272 kubelet[2143]: E0208 23:23:31.624235 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:32.624570 kubelet[2143]: E0208 23:23:32.624527 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:34.241414 systemd[1]: run-containerd-runc-k8s.io-9de929b19ac122eb246996f64071b38405fcb01c35b3918e3b439c8a9ef2af24-runc.fK3JhK.mount: Deactivated successfully. Feb 8 23:23:36.138000 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:40700.service. 
Feb 8 23:23:36.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.76:22-10.0.0.1:40700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.139041 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:36.139096 kernel: audit: type=1130 audit(1707434616.136:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.76:22-10.0.0.1:40700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.169000 audit[4575]: USER_ACCT pid=4575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.171174 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 40700 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:36.172000 audit[4575]: CRED_ACQ pid=4575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.174441 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:36.177663 kernel: audit: type=1101 audit(1707434616.169:412): pid=4575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.177729 kernel: audit: type=1103 audit(1707434616.172:413): pid=4575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.177783 kernel: audit: type=1006 audit(1707434616.172:414): pid=4575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 8 23:23:36.179126 systemd-logind[1177]: New session 19 of user core. Feb 8 23:23:36.179502 systemd[1]: Started session-19.scope. Feb 8 23:23:36.172000 audit[4575]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc95b0a3d0 a2=3 a3=0 items=0 ppid=1 pid=4575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:36.183512 kernel: audit: type=1300 audit(1707434616.172:414): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc95b0a3d0 a2=3 a3=0 items=0 ppid=1 pid=4575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:36.172000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:36.185785 kernel: audit: type=1327 audit(1707434616.172:414): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:36.184000 audit[4575]: USER_START pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.190035 kernel: audit: type=1105 audit(1707434616.184:415): pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.190137 kernel: audit: type=1103 
audit(1707434616.188:416): pid=4578 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.188000 audit[4578]: CRED_ACQ pid=4578 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.312359 sshd[4575]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:36.311000 audit[4575]: USER_END pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.315004 systemd-logind[1177]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:23:36.315273 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:40700.service: Deactivated successfully. Feb 8 23:23:36.316246 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:23:36.316840 systemd-logind[1177]: Removed session 19. 
Feb 8 23:23:36.317812 kernel: audit: type=1106 audit(1707434616.311:417): pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.311000 audit[4575]: CRED_DISP pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:36.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.76:22-10.0.0.1:40700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.321784 kernel: audit: type=1104 audit(1707434616.311:418): pid=4575 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.76:22-10.0.0.1:45668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:41.314794 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:45668.service. Feb 8 23:23:41.318190 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:41.318253 kernel: audit: type=1130 audit(1707434621.313:420): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.76:22-10.0.0.1:45668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:41.463000 audit[4592]: USER_ACCT pid=4592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.465756 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 45668 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:41.468206 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:41.466000 audit[4592]: CRED_ACQ pid=4592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.471535 kernel: audit: type=1101 audit(1707434621.463:421): pid=4592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.471596 kernel: audit: type=1103 audit(1707434621.466:422): pid=4592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.471732 systemd-logind[1177]: New session 20 of user core. Feb 8 23:23:41.472541 systemd[1]: Started session-20.scope. 
Feb 8 23:23:41.473830 kernel: audit: type=1006 audit(1707434621.466:423): pid=4592 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Feb 8 23:23:41.473882 kernel: audit: type=1300 audit(1707434621.466:423): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a5dd9c0 a2=3 a3=0 items=0 ppid=1 pid=4592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:41.466000 audit[4592]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a5dd9c0 a2=3 a3=0 items=0 ppid=1 pid=4592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:41.477274 kernel: audit: type=1327 audit(1707434621.466:423): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:41.466000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:41.476000 audit[4592]: USER_START pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.482844 kernel: audit: type=1105 audit(1707434621.476:424): pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.482935 kernel: audit: type=1103 audit(1707434621.477:425): pid=4595 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Feb 8 23:23:41.477000 audit[4595]: CRED_ACQ pid=4595 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.577328 sshd[4592]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:41.576000 audit[4592]: USER_END pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.579574 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:45668.service: Deactivated successfully. Feb 8 23:23:41.580601 systemd-logind[1177]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:23:41.580625 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:23:41.576000 audit[4592]: CRED_DISP pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.581554 systemd-logind[1177]: Removed session 20. 
Feb 8 23:23:41.583407 kernel: audit: type=1106 audit(1707434621.576:426): pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.583486 kernel: audit: type=1104 audit(1707434621.576:427): pid=4592 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:41.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.76:22-10.0.0.1:45668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:42.373133 kubelet[2143]: E0208 23:23:42.373093 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:46.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.76:22-10.0.0.1:45672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:46.579923 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:45672.service. Feb 8 23:23:46.583117 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:23:46.583172 kernel: audit: type=1130 audit(1707434626.579:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.76:22-10.0.0.1:45672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:46.626000 audit[4629]: USER_ACCT pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.627552 sshd[4629]: Accepted publickey for core from 10.0.0.1 port 45672 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:46.629000 audit[4629]: CRED_ACQ pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.631000 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:46.633574 kernel: audit: type=1101 audit(1707434626.626:430): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.633627 kernel: audit: type=1103 audit(1707434626.629:431): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.633648 kernel: audit: type=1006 audit(1707434626.629:432): pid=4629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Feb 8 23:23:46.634410 systemd-logind[1177]: New session 21 of user core. Feb 8 23:23:46.635324 systemd[1]: Started session-21.scope. 
Feb 8 23:23:46.629000 audit[4629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe582ba980 a2=3 a3=0 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:46.639070 kernel: audit: type=1300 audit(1707434626.629:432): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe582ba980 a2=3 a3=0 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:46.629000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:46.640546 kernel: audit: type=1327 audit(1707434626.629:432): proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:46.640609 kernel: audit: type=1105 audit(1707434626.638:433): pid=4629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.638000 audit[4629]: USER_START pid=4629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.639000 audit[4632]: CRED_ACQ pid=4632 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.646230 kernel: audit: type=1103 audit(1707434626.639:434): pid=4632 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.744568 sshd[4629]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:46.747449 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:45682.service. Feb 8 23:23:46.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.76:22-10.0.0.1:45682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:46.750000 audit[4629]: USER_END pid=4629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.751786 kernel: audit: type=1130 audit(1707434626.746:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.76:22-10.0.0.1:45682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:46.751839 kernel: audit: type=1106 audit(1707434626.750:436): pid=4629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.752942 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:45672.service: Deactivated successfully. Feb 8 23:23:46.753830 systemd[1]: session-21.scope: Deactivated successfully. 
Feb 8 23:23:46.750000 audit[4629]: CRED_DISP pid=4629 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.76:22-10.0.0.1:45672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:46.756603 systemd-logind[1177]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:23:46.757281 systemd-logind[1177]: Removed session 21. Feb 8 23:23:46.779000 audit[4641]: USER_ACCT pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.780807 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:46.780000 audit[4641]: CRED_ACQ pid=4641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.780000 audit[4641]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3beceb10 a2=3 a3=0 items=0 ppid=1 pid=4641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:46.780000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:46.781900 sshd[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:46.785193 systemd-logind[1177]: New session 22 of user core. 
Feb 8 23:23:46.785927 systemd[1]: Started session-22.scope. Feb 8 23:23:46.789000 audit[4641]: USER_START pid=4641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:46.790000 audit[4646]: CRED_ACQ pid=4646 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.118870 sshd[4641]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:47.119000 audit[4641]: USER_END pid=4641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.121040 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:45690.service. Feb 8 23:23:47.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.76:22-10.0.0.1:45690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:47.120000 audit[4641]: CRED_DISP pid=4641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.124671 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:45682.service: Deactivated successfully. 
Feb 8 23:23:47.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.76:22-10.0.0.1:45682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:47.125853 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:23:47.126345 systemd-logind[1177]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:23:47.127143 systemd-logind[1177]: Removed session 22. Feb 8 23:23:47.158000 audit[4654]: USER_ACCT pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.159977 sshd[4654]: Accepted publickey for core from 10.0.0.1 port 45690 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:47.159000 audit[4654]: CRED_ACQ pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.160000 audit[4654]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffef1e9de0 a2=3 a3=0 items=0 ppid=1 pid=4654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:47.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:47.161083 sshd[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.164542 systemd-logind[1177]: New session 23 of user core. Feb 8 23:23:47.165281 systemd[1]: Started session-23.scope. 
Feb 8 23:23:47.168000 audit[4654]: USER_START pid=4654 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.170000 audit[4659]: CRED_ACQ pid=4659 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:47.623653 kubelet[2143]: E0208 23:23:47.623623 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:47.799447 kubelet[2143]: I0208 23:23:47.799373 2143 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:23:47.847929 kubelet[2143]: I0208 23:23:47.847899 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx4d\" (UniqueName: \"kubernetes.io/projected/c115be70-dfce-4e8c-8de8-5c45e5f51c39-kube-api-access-rsx4d\") pod \"calico-apiserver-d84f5f897-4lnfm\" (UID: \"c115be70-dfce-4e8c-8de8-5c45e5f51c39\") " pod="calico-apiserver/calico-apiserver-d84f5f897-4lnfm" Feb 8 23:23:47.848150 kubelet[2143]: I0208 23:23:47.848138 2143 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c115be70-dfce-4e8c-8de8-5c45e5f51c39-calico-apiserver-certs\") pod \"calico-apiserver-d84f5f897-4lnfm\" (UID: \"c115be70-dfce-4e8c-8de8-5c45e5f51c39\") " pod="calico-apiserver/calico-apiserver-d84f5f897-4lnfm" Feb 8 23:23:47.861000 audit[4699]: NETFILTER_CFG table=filter:123 family=2 entries=7 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:47.861000 
audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcbbd6db20 a2=0 a3=7ffcbbd6db0c items=0 ppid=2312 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:47.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:47.864000 audit[4699]: NETFILTER_CFG table=nat:124 family=2 entries=78 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:47.864000 audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcbbd6db20 a2=0 a3=7ffcbbd6db0c items=0 ppid=2312 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:47.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:47.903000 audit[4725]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:47.903000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff151ed970 a2=0 a3=7fff151ed95c items=0 ppid=2312 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:47.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:47.904000 audit[4725]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=4725 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:47.904000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff151ed970 a2=0 a3=7fff151ed95c items=0 ppid=2312 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:47.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:47.949838 kubelet[2143]: E0208 23:23:47.949806 2143 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 8 23:23:47.950525 kubelet[2143]: E0208 23:23:47.950491 2143 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c115be70-dfce-4e8c-8de8-5c45e5f51c39-calico-apiserver-certs podName:c115be70-dfce-4e8c-8de8-5c45e5f51c39 nodeName:}" failed. No retries permitted until 2024-02-08 23:23:48.449879211 +0000 UTC m=+95.963298464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c115be70-dfce-4e8c-8de8-5c45e5f51c39-calico-apiserver-certs") pod "calico-apiserver-d84f5f897-4lnfm" (UID: "c115be70-dfce-4e8c-8de8-5c45e5f51c39") : secret "calico-apiserver-certs" not found Feb 8 23:23:48.705360 env[1196]: time="2024-02-08T23:23:48.705297524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d84f5f897-4lnfm,Uid:c115be70-dfce-4e8c-8de8-5c45e5f51c39,Namespace:calico-apiserver,Attempt:0,}" Feb 8 23:23:48.762460 sshd[4654]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:48.763000 audit[4654]: USER_END pid=4654 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.763000 audit[4654]: CRED_DISP pid=4654 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.76:22-10.0.0.1:47322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:48.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.76:22-10.0.0.1:45690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:48.764886 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:47322.service. Feb 8 23:23:48.765626 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:45690.service: Deactivated successfully. Feb 8 23:23:48.766477 systemd[1]: session-23.scope: Deactivated successfully. 
Feb 8 23:23:48.768643 systemd-logind[1177]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:23:48.771949 systemd-logind[1177]: Removed session 23. Feb 8 23:23:48.807321 sshd[4737]: Accepted publickey for core from 10.0.0.1 port 47322 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:48.806000 audit[4737]: USER_ACCT pid=4737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.807000 audit[4737]: CRED_ACQ pid=4737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.807000 audit[4737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd77e859c0 a2=3 a3=0 items=0 ppid=1 pid=4737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:48.807000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:48.808392 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:48.813010 systemd[1]: Started session-24.scope. Feb 8 23:23:48.813317 systemd-logind[1177]: New session 24 of user core. 
Feb 8 23:23:48.819000 audit[4737]: USER_START pid=4737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.820000 audit[4752]: CRED_ACQ pid=4752 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:48.877854 systemd-networkd[1070]: cali1878cd2eaf3: Link UP Feb 8 23:23:48.883743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:23:48.883830 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1878cd2eaf3: link becomes ready Feb 8 23:23:48.883380 systemd-networkd[1070]: cali1878cd2eaf3: Gained carrier Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.799 [INFO][4728] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0 calico-apiserver-d84f5f897- calico-apiserver c115be70-dfce-4e8c-8de8-5c45e5f51c39 1086 0 2024-02-08 23:23:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d84f5f897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d84f5f897-4lnfm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1878cd2eaf3 [] []}} ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.799 [INFO][4728] k8s.go 76: Extracted identifiers 
for CmdAddK8s ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.828 [INFO][4744] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" HandleID="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Workload="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.846 [INFO][4744] ipam_plugin.go 268: Auto assigning IP ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" HandleID="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Workload="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d84f5f897-4lnfm", "timestamp":"2024-02-08 23:23:48.828502242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.846 [INFO][4744] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.846 [INFO][4744] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.846 [INFO][4744] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.847 [INFO][4744] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.851 [INFO][4744] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.861 [INFO][4744] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.862 [INFO][4744] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.865 [INFO][4744] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.865 [INFO][4744] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.866 [INFO][4744] ipam.go 1682: Creating new handle: k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.869 [INFO][4744] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.873 [INFO][4744] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.873 [INFO][4744] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" host="localhost" Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.873 [INFO][4744] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:23:48.897799 env[1196]: 2024-02-08 23:23:48.873 [INFO][4744] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" HandleID="k8s-pod-network.8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Workload="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.875 [INFO][4728] k8s.go 385: Populated endpoint ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0", GenerateName:"calico-apiserver-d84f5f897-", Namespace:"calico-apiserver", SelfLink:"", UID:"c115be70-dfce-4e8c-8de8-5c45e5f51c39", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d84f5f897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d84f5f897-4lnfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1878cd2eaf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.876 [INFO][4728] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.876 [INFO][4728] dataplane_linux.go 68: Setting the host side veth name to cali1878cd2eaf3 ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.884 [INFO][4728] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.885 [INFO][4728] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0", GenerateName:"calico-apiserver-d84f5f897-", Namespace:"calico-apiserver", SelfLink:"", UID:"c115be70-dfce-4e8c-8de8-5c45e5f51c39", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d84f5f897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d", Pod:"calico-apiserver-d84f5f897-4lnfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1878cd2eaf3", MAC:"fe:1b:98:3c:e8:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:23:48.898585 env[1196]: 2024-02-08 23:23:48.891 [INFO][4728] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d" Namespace="calico-apiserver" Pod="calico-apiserver-d84f5f897-4lnfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--d84f5f897--4lnfm-eth0" Feb 8 23:23:48.914000 audit[4779]: NETFILTER_CFG table=filter:127 family=2 entries=59 op=nft_register_chain pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:23:48.914000 
audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7fffa0903fe0 a2=0 a3=7fffa0903fcc items=0 ppid=3552 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:48.914000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:23:48.957000 audit[4806]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:48.957000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffd1942a370 a2=0 a3=7ffd1942a35c items=0 ppid=2312 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:48.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:48.958000 audit[4806]: NETFILTER_CFG table=nat:129 family=2 entries=78 op=nft_register_rule pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:48.958000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd1942a370 a2=0 a3=7ffd1942a35c items=0 ppid=2312 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:48.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:48.977699 env[1196]: time="2024-02-08T23:23:48.977632908Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:48.977881 env[1196]: time="2024-02-08T23:23:48.977677613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:48.977881 env[1196]: time="2024-02-08T23:23:48.977688334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:48.978065 env[1196]: time="2024-02-08T23:23:48.977982312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d pid=4814 runtime=io.containerd.runc.v2 Feb 8 23:23:49.009527 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:23:49.034163 env[1196]: time="2024-02-08T23:23:49.034113530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d84f5f897-4lnfm,Uid:c115be70-dfce-4e8c-8de8-5c45e5f51c39,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d\"" Feb 8 23:23:49.035672 env[1196]: time="2024-02-08T23:23:49.035641852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 8 23:23:49.264719 sshd[4737]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:49.265000 audit[4737]: USER_END pid=4737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.265000 audit[4737]: CRED_DISP pid=4737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.267007 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:47324.service. Feb 8 23:23:49.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.76:22-10.0.0.1:47324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:49.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.76:22-10.0.0.1:47322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:49.267669 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:47322.service: Deactivated successfully. Feb 8 23:23:49.268913 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:23:49.268932 systemd-logind[1177]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:23:49.269802 systemd-logind[1177]: Removed session 24. 
Feb 8 23:23:49.301000 audit[4847]: USER_ACCT pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.302194 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:49.302000 audit[4847]: CRED_ACQ pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.302000 audit[4847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcc5a4580 a2=3 a3=0 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:49.302000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:49.303214 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:49.306311 systemd-logind[1177]: New session 25 of user core. Feb 8 23:23:49.307042 systemd[1]: Started session-25.scope. 
Feb 8 23:23:49.310000 audit[4847]: USER_START pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.311000 audit[4852]: CRED_ACQ pid=4852 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.409362 sshd[4847]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:49.409000 audit[4847]: USER_END pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.409000 audit[4847]: CRED_DISP pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:49.411588 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:47324.service: Deactivated successfully. Feb 8 23:23:49.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.76:22-10.0.0.1:47324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:49.412454 systemd-logind[1177]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:23:49.412492 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:23:49.413265 systemd-logind[1177]: Removed session 25. 
Feb 8 23:23:50.077925 systemd-networkd[1070]: cali1878cd2eaf3: Gained IPv6LL Feb 8 23:23:50.746683 systemd[1]: run-containerd-runc-k8s.io-9de929b19ac122eb246996f64071b38405fcb01c35b3918e3b439c8a9ef2af24-runc.VuV1Fb.mount: Deactivated successfully. Feb 8 23:23:52.881638 env[1196]: time="2024-02-08T23:23:52.881594754Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:52.883413 env[1196]: time="2024-02-08T23:23:52.883388922Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:52.885145 env[1196]: time="2024-02-08T23:23:52.885084303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:52.886824 env[1196]: time="2024-02-08T23:23:52.886800884Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:52.887477 env[1196]: time="2024-02-08T23:23:52.887455248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 8 23:23:52.889333 env[1196]: time="2024-02-08T23:23:52.889311373Z" level=info msg="CreateContainer within sandbox \"8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 8 23:23:52.899430 env[1196]: time="2024-02-08T23:23:52.899370601Z" level=info msg="CreateContainer within sandbox 
\"8e633fb0f2d0656866d6f9cf30ef043f2f99823376ee42199221a8e83451870d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50605fc48091a7a328a2ee5433ea9f4fcf6840b1742e7f9f50c406b10dfd9625\"" Feb 8 23:23:52.900094 env[1196]: time="2024-02-08T23:23:52.899994898Z" level=info msg="StartContainer for \"50605fc48091a7a328a2ee5433ea9f4fcf6840b1742e7f9f50c406b10dfd9625\"" Feb 8 23:23:52.962404 env[1196]: time="2024-02-08T23:23:52.962365797Z" level=info msg="StartContainer for \"50605fc48091a7a328a2ee5433ea9f4fcf6840b1742e7f9f50c406b10dfd9625\" returns successfully" Feb 8 23:23:53.206000 audit[4946]: NETFILTER_CFG table=filter:130 family=2 entries=32 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.215209 kernel: kauditd_printk_skb: 66 callbacks suppressed Feb 8 23:23:53.215369 kernel: audit: type=1325 audit(1707434633.206:481): table=filter:130 family=2 entries=32 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.215401 kernel: audit: type=1300 audit(1707434633.206:481): arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc5e08add0 a2=0 a3=7ffc5e08adbc items=0 ppid=2312 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.215430 kernel: audit: type=1327 audit(1707434633.206:481): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.206000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc5e08add0 a2=0 a3=7ffc5e08adbc items=0 ppid=2312 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:23:53.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.207000 audit[4946]: NETFILTER_CFG table=nat:131 family=2 entries=78 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.207000 audit[4946]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc5e08add0 a2=0 a3=7ffc5e08adbc items=0 ppid=2312 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.225522 kernel: audit: type=1325 audit(1707434633.207:482): table=nat:131 family=2 entries=78 op=nft_register_rule pid=4946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.225605 kernel: audit: type=1300 audit(1707434633.207:482): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc5e08add0 a2=0 a3=7ffc5e08adbc items=0 ppid=2312 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.225647 kernel: audit: type=1327 audit(1707434633.207:482): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.871882 kubelet[2143]: I0208 23:23:53.871848 2143 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d84f5f897-4lnfm" podStartSLOduration=-9.223372029982965e+09 pod.CreationTimestamp="2024-02-08 23:23:47 +0000 UTC" firstStartedPulling="2024-02-08 23:23:49.035364856 +0000 UTC m=+96.548784119" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:23:53.870968501 +0000 UTC m=+101.384387754" watchObservedRunningTime="2024-02-08 23:23:53.871809429 +0000 UTC m=+101.385228682" Feb 8 23:23:53.910000 audit[4978]: NETFILTER_CFG table=filter:132 family=2 entries=32 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.910000 audit[4978]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffd70231670 a2=0 a3=7ffd7023165c items=0 ppid=2312 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.917078 kernel: audit: type=1325 audit(1707434633.910:483): table=filter:132 family=2 entries=32 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.917133 kernel: audit: type=1300 audit(1707434633.910:483): arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffd70231670 a2=0 a3=7ffd7023165c items=0 ppid=2312 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.917156 kernel: audit: type=1327 audit(1707434633.910:483): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.912000 audit[4978]: NETFILTER_CFG table=nat:133 family=2 entries=78 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:53.912000 audit[4978]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd70231670 a2=0 a3=7ffd7023165c items=0 
ppid=2312 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:53.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:53.925786 kernel: audit: type=1325 audit(1707434633.912:484): table=nat:133 family=2 entries=78 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:54.411903 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:47332.service. Feb 8 23:23:54.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.76:22-10.0.0.1:47332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:54.444000 audit[4981]: USER_ACCT pid=4981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.445535 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:54.445000 audit[4981]: CRED_ACQ pid=4981 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.445000 audit[4981]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd9790490 a2=3 a3=0 items=0 ppid=1 pid=4981 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:54.445000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Feb 8 23:23:54.446365 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:54.449301 systemd-logind[1177]: New session 26 of user core. Feb 8 23:23:54.450208 systemd[1]: Started session-26.scope. Feb 8 23:23:54.453000 audit[4981]: USER_START pid=4981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.454000 audit[4984]: CRED_ACQ pid=4984 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.550072 sshd[4981]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:54.550000 audit[4981]: USER_END pid=4981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.550000 audit[4981]: CRED_DISP pid=4981 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:54.552636 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:47332.service: Deactivated successfully. Feb 8 23:23:54.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.76:22-10.0.0.1:47332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:54.553509 systemd[1]: session-26.scope: Deactivated successfully. 
Feb 8 23:23:54.554513 systemd-logind[1177]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:23:54.555275 systemd-logind[1177]: Removed session 26. Feb 8 23:23:58.212000 audit[5023]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:58.214787 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 8 23:23:58.214839 kernel: audit: type=1325 audit(1707434638.212:494): table=filter:134 family=2 entries=20 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:58.212000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffef9888430 a2=0 a3=7ffef988841c items=0 ppid=2312 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:58.218703 kernel: audit: type=1300 audit(1707434638.212:494): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffef9888430 a2=0 a3=7ffef988841c items=0 ppid=2312 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:58.218847 kernel: audit: type=1327 audit(1707434638.212:494): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:58.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:58.215000 audit[5023]: NETFILTER_CFG table=nat:135 family=2 entries=162 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:58.215000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffef9888430 a2=0 
a3=7ffef988841c items=0 ppid=2312 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:58.229227 kernel: audit: type=1325 audit(1707434638.215:495): table=nat:135 family=2 entries=162 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:23:58.229273 kernel: audit: type=1300 audit(1707434638.215:495): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffef9888430 a2=0 a3=7ffef988841c items=0 ppid=2312 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:58.229293 kernel: audit: type=1327 audit(1707434638.215:495): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:58.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:23:59.553733 systemd[1]: Started sshd@26-10.0.0.76:22-10.0.0.1:55320.service. Feb 8 23:23:59.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.76:22-10.0.0.1:55320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:59.556788 kernel: audit: type=1130 audit(1707434639.553:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.76:22-10.0.0.1:55320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:59.587000 audit[5027]: USER_ACCT pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:59.589064 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:59.591150 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:59.590000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:59.594300 kernel: audit: type=1101 audit(1707434639.587:497): pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:59.594359 kernel: audit: type=1103 audit(1707434639.590:498): pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:23:59.594383 kernel: audit: type=1006 audit(1707434639.590:499): pid=5027 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 8 23:23:59.595579 systemd[1]: Started session-27.scope. 
Feb 8 23:23:59.590000 audit[5027]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7133acc0 a2=3 a3=0 items=0 ppid=1 pid=5027 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:23:59.590000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:23:59.595772 systemd-logind[1177]: New session 27 of user core.
Feb 8 23:23:59.599000 audit[5027]: USER_START pid=5027 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:23:59.600000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:23:59.694039 sshd[5027]: pam_unix(sshd:session): session closed for user core
Feb 8 23:23:59.694000 audit[5027]: USER_END pid=5027 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:23:59.694000 audit[5027]: CRED_DISP pid=5027 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:23:59.696412 systemd[1]: sshd@26-10.0.0.76:22-10.0.0.1:55320.service: Deactivated successfully.
Feb 8 23:23:59.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.76:22-10.0.0.1:55320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:59.697255 systemd[1]: session-27.scope: Deactivated successfully.
Feb 8 23:23:59.698032 systemd-logind[1177]: Session 27 logged out. Waiting for processes to exit.
Feb 8 23:23:59.698726 systemd-logind[1177]: Removed session 27.
Feb 8 23:24:04.696930 systemd[1]: Started sshd@27-10.0.0.76:22-10.0.0.1:55328.service.
Feb 8 23:24:04.701303 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 8 23:24:04.701414 kernel: audit: type=1130 audit(1707434644.696:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.76:22-10.0.0.1:55328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:04.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.76:22-10.0.0.1:55328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:04.730000 audit[5061]: USER_ACCT pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.734349 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 55328 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:24:04.734539 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:24:04.737368 kernel: audit: type=1101 audit(1707434644.730:506): pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.737421 kernel: audit: type=1103 audit(1707434644.733:507): pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.733000 audit[5061]: CRED_ACQ pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.739336 kernel: audit: type=1006 audit(1707434644.733:508): pid=5061 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 8 23:24:04.742480 kernel: audit: type=1300 audit(1707434644.733:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff201f1390 a2=3 a3=0 items=0 ppid=1 pid=5061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:24:04.733000 audit[5061]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff201f1390 a2=3 a3=0 items=0 ppid=1 pid=5061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:24:04.733000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:24:04.745518 kernel: audit: type=1327 audit(1707434644.733:508): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:24:04.746599 systemd[1]: Started session-28.scope.
Feb 8 23:24:04.747492 systemd-logind[1177]: New session 28 of user core.
Feb 8 23:24:04.751000 audit[5061]: USER_START pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.755821 kernel: audit: type=1105 audit(1707434644.751:509): pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.755000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.758803 kernel: audit: type=1103 audit(1707434644.755:510): pid=5064 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.852844 sshd[5061]: pam_unix(sshd:session): session closed for user core
Feb 8 23:24:04.852000 audit[5061]: USER_END pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.855059 systemd[1]: sshd@27-10.0.0.76:22-10.0.0.1:55328.service: Deactivated successfully.
Feb 8 23:24:04.852000 audit[5061]: CRED_DISP pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.856180 systemd[1]: session-28.scope: Deactivated successfully.
Feb 8 23:24:04.856549 systemd-logind[1177]: Session 28 logged out. Waiting for processes to exit.
Feb 8 23:24:04.857481 systemd-logind[1177]: Removed session 28.
Feb 8 23:24:04.858916 kernel: audit: type=1106 audit(1707434644.852:511): pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.859020 kernel: audit: type=1104 audit(1707434644.852:512): pid=5061 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:04.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.76:22-10.0.0.1:55328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:06.624404 kubelet[2143]: E0208 23:24:06.624367 2143 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:09.855846 systemd[1]: Started sshd@28-10.0.0.76:22-10.0.0.1:42088.service.
Feb 8 23:24:09.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.76:22-10.0.0.1:42088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:09.856802 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 8 23:24:09.856930 kernel: audit: type=1130 audit(1707434649.855:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.76:22-10.0.0.1:42088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:09.886000 audit[5079]: USER_ACCT pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.887291 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:24:09.889211 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:24:09.888000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.893819 kernel: audit: type=1101 audit(1707434649.886:515): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.893881 kernel: audit: type=1103 audit(1707434649.888:516): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.893910 kernel: audit: type=1006 audit(1707434649.888:517): pid=5079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Feb 8 23:24:09.888000 audit[5079]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeee36aac0 a2=3 a3=0 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:24:09.897816 systemd-logind[1177]: New session 29 of user core.
Feb 8 23:24:09.898708 systemd[1]: Started session-29.scope.
Feb 8 23:24:09.899568 kernel: audit: type=1300 audit(1707434649.888:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeee36aac0 a2=3 a3=0 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:24:09.899620 kernel: audit: type=1327 audit(1707434649.888:517): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:24:09.888000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:24:09.902000 audit[5079]: USER_START pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.903000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.908431 kernel: audit: type=1105 audit(1707434649.902:518): pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.908502 kernel: audit: type=1103 audit(1707434649.903:519): pid=5083 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.997510 sshd[5079]: pam_unix(sshd:session): session closed for user core
Feb 8 23:24:09.997000 audit[5079]: USER_END pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.999406 systemd[1]: sshd@28-10.0.0.76:22-10.0.0.1:42088.service: Deactivated successfully.
Feb 8 23:24:10.000255 systemd[1]: session-29.scope: Deactivated successfully.
Feb 8 23:24:09.997000 audit[5079]: CRED_DISP pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:10.003782 kernel: audit: type=1106 audit(1707434649.997:520): pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:10.003834 kernel: audit: type=1104 audit(1707434649.997:521): pid=5079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 8 23:24:09.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.76:22-10.0.0.1:42088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:24:10.004358 systemd-logind[1177]: Session 29 logged out. Waiting for processes to exit.
Feb 8 23:24:10.005063 systemd-logind[1177]: Removed session 29.
Feb 8 23:24:12.619462 env[1196]: time="2024-02-08T23:24:12.619416182Z" level=info msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\""
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.656 [WARNING][5134] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--v9kwz-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"04f8b771-04a3-4156-98db-f84147d5ca2e", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405", Pod:"coredns-787d4945fb-v9kwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5b500c48d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.656 [INFO][5134] k8s.go 578: Cleaning up netns ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.656 [INFO][5134] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" iface="eth0" netns=""
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.656 [INFO][5134] k8s.go 585: Releasing IP address(es) ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.656 [INFO][5134] utils.go 188: Calico CNI releasing IP address ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.674 [INFO][5142] ipam_plugin.go 415: Releasing address using handleID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.674 [INFO][5142] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.674 [INFO][5142] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.681 [WARNING][5142] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.681 [INFO][5142] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.683 [INFO][5142] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 8 23:24:12.685973 env[1196]: 2024-02-08 23:24:12.684 [INFO][5134] k8s.go 591: Teardown processing complete. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.686437 env[1196]: time="2024-02-08T23:24:12.685992939Z" level=info msg="TearDown network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" successfully"
Feb 8 23:24:12.686437 env[1196]: time="2024-02-08T23:24:12.686023708Z" level=info msg="StopPodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" returns successfully"
Feb 8 23:24:12.686593 env[1196]: time="2024-02-08T23:24:12.686552073Z" level=info msg="RemovePodSandbox for \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\""
Feb 8 23:24:12.686777 env[1196]: time="2024-02-08T23:24:12.686591548Z" level=info msg="Forcibly stopping sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\""
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.717 [WARNING][5164] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--v9kwz-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"04f8b771-04a3-4156-98db-f84147d5ca2e", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"603f11c30e8b3987cb44cd27cf3d4c87cef329921d06447641cd857c15276405", Pod:"coredns-787d4945fb-v9kwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5b500c48d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.717 [INFO][5164] k8s.go 578: Cleaning up netns ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.717 [INFO][5164] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" iface="eth0" netns=""
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.717 [INFO][5164] k8s.go 585: Releasing IP address(es) ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.717 [INFO][5164] utils.go 188: Calico CNI releasing IP address ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.733 [INFO][5172] ipam_plugin.go 415: Releasing address using handleID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.733 [INFO][5172] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.733 [INFO][5172] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.740 [WARNING][5172] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.740 [INFO][5172] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" HandleID="k8s-pod-network.5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb" Workload="localhost-k8s-coredns--787d4945fb--v9kwz-eth0"
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.743 [INFO][5172] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 8 23:24:12.745930 env[1196]: 2024-02-08 23:24:12.744 [INFO][5164] k8s.go 591: Teardown processing complete. ContainerID="5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb"
Feb 8 23:24:12.746393 env[1196]: time="2024-02-08T23:24:12.745952406Z" level=info msg="TearDown network for sandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" successfully"
Feb 8 23:24:12.752999 env[1196]: time="2024-02-08T23:24:12.752962774Z" level=info msg="RemovePodSandbox \"5d36cbb73258363dd08902f4449be93ccb6ac791a0400583039810607bdd69bb\" returns successfully"
Feb 8 23:24:12.753467 env[1196]: time="2024-02-08T23:24:12.753442155Z" level=info msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\""
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.786 [WARNING][5195] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0", GenerateName:"calico-kube-controllers-c96d6f8c9-", Namespace:"calico-system", SelfLink:"", UID:"e684c1fb-85af-423e-8c6b-15288ce2126a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96d6f8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0", Pod:"calico-kube-controllers-c96d6f8c9-hs9cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f68d1edd6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.786 [INFO][5195] k8s.go 578: Cleaning up netns ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.786 [INFO][5195] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" iface="eth0" netns=""
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.786 [INFO][5195] k8s.go 585: Releasing IP address(es) ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.786 [INFO][5195] utils.go 188: Calico CNI releasing IP address ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.803 [INFO][5202] ipam_plugin.go 415: Releasing address using handleID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.803 [INFO][5202] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.803 [INFO][5202] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.810 [WARNING][5202] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.810 [INFO][5202] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.811 [INFO][5202] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 8 23:24:12.814911 env[1196]: 2024-02-08 23:24:12.812 [INFO][5195] k8s.go 591: Teardown processing complete. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.815383 env[1196]: time="2024-02-08T23:24:12.814943584Z" level=info msg="TearDown network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" successfully"
Feb 8 23:24:12.815383 env[1196]: time="2024-02-08T23:24:12.814982809Z" level=info msg="StopPodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" returns successfully"
Feb 8 23:24:12.815565 env[1196]: time="2024-02-08T23:24:12.815522284Z" level=info msg="RemovePodSandbox for \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\""
Feb 8 23:24:12.815720 env[1196]: time="2024-02-08T23:24:12.815564134Z" level=info msg="Forcibly stopping sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\""
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.847 [WARNING][5226] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0", GenerateName:"calico-kube-controllers-c96d6f8c9-", Namespace:"calico-system", SelfLink:"", UID:"e684c1fb-85af-423e-8c6b-15288ce2126a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96d6f8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06e190ea12ef92f5ad2f32eb8fd17118faf014de174afdb75ceb849ee12bb3f0", Pod:"calico-kube-controllers-c96d6f8c9-hs9cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f68d1edd6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.847 [INFO][5226] k8s.go 578: Cleaning up netns ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.847 [INFO][5226] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" iface="eth0" netns=""
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.847 [INFO][5226] k8s.go 585: Releasing IP address(es) ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.847 [INFO][5226] utils.go 188: Calico CNI releasing IP address ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.872 [INFO][5234] ipam_plugin.go 415: Releasing address using handleID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.872 [INFO][5234] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.872 [INFO][5234] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.882 [WARNING][5234] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.882 [INFO][5234] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" HandleID="k8s-pod-network.9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8" Workload="localhost-k8s-calico--kube--controllers--c96d6f8c9--hs9cp-eth0"
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.884 [INFO][5234] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 8 23:24:12.888615 env[1196]: 2024-02-08 23:24:12.885 [INFO][5226] k8s.go 591: Teardown processing complete. ContainerID="9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8"
Feb 8 23:24:12.888615 env[1196]: time="2024-02-08T23:24:12.887922392Z" level=info msg="TearDown network for sandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" successfully"
Feb 8 23:24:12.891702 env[1196]: time="2024-02-08T23:24:12.891664025Z" level=info msg="RemovePodSandbox \"9fffcb052bb60be8837b85d7f90d1b14e71326a9d15eb1a653c97d9aed2590c8\" returns successfully"
Feb 8 23:24:12.893176 env[1196]: time="2024-02-08T23:24:12.893117338Z" level=info msg="StopPodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\""
Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.930 [WARNING][5259] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--bvd5v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"53d36018-7aad-443b-be14-946096d7c23e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c", Pod:"coredns-787d4945fb-bvd5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4355ef0074f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.931 [INFO][5259] k8s.go 578: Cleaning up netns ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69"
Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.931 [INFO][5259] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring.
ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" iface="eth0" netns="" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.931 [INFO][5259] k8s.go 585: Releasing IP address(es) ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.931 [INFO][5259] utils.go 188: Calico CNI releasing IP address ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.949 [INFO][5266] ipam_plugin.go 415: Releasing address using handleID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.949 [INFO][5266] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.949 [INFO][5266] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.956 [WARNING][5266] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.956 [INFO][5266] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.957 [INFO][5266] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:24:12.960681 env[1196]: 2024-02-08 23:24:12.958 [INFO][5259] k8s.go 591: Teardown processing complete. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:12.961177 env[1196]: time="2024-02-08T23:24:12.960711741Z" level=info msg="TearDown network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" successfully" Feb 8 23:24:12.961177 env[1196]: time="2024-02-08T23:24:12.960749963Z" level=info msg="StopPodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" returns successfully" Feb 8 23:24:12.961289 env[1196]: time="2024-02-08T23:24:12.961242420Z" level=info msg="RemovePodSandbox for \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\"" Feb 8 23:24:12.961339 env[1196]: time="2024-02-08T23:24:12.961296843Z" level=info msg="Forcibly stopping sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\"" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:12.993 [WARNING][5289] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--bvd5v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"53d36018-7aad-443b-be14-946096d7c23e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43f199dd6921b16cb62ed7165b40609f33578b4fe43571899eac96c98544640c", Pod:"coredns-787d4945fb-bvd5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4355ef0074f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:12.993 [INFO][5289] k8s.go 578: Cleaning up netns 
ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:12.993 [INFO][5289] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" iface="eth0" netns="" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:12.993 [INFO][5289] k8s.go 585: Releasing IP address(es) ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:12.993 [INFO][5289] utils.go 188: Calico CNI releasing IP address ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.009 [INFO][5297] ipam_plugin.go 415: Releasing address using handleID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.009 [INFO][5297] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.009 [INFO][5297] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.015 [WARNING][5297] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.015 [INFO][5297] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" HandleID="k8s-pod-network.f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Workload="localhost-k8s-coredns--787d4945fb--bvd5v-eth0" Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.016 [INFO][5297] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:24:13.019547 env[1196]: 2024-02-08 23:24:13.018 [INFO][5289] k8s.go 591: Teardown processing complete. ContainerID="f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69" Feb 8 23:24:13.019547 env[1196]: time="2024-02-08T23:24:13.019553494Z" level=info msg="TearDown network for sandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" successfully" Feb 8 23:24:13.022589 env[1196]: time="2024-02-08T23:24:13.022559489Z" level=info msg="RemovePodSandbox \"f0cb2e7115ec5e52d99a0f38071ec36e096515c2620b058f1880ec257700bb69\" returns successfully" Feb 8 23:24:13.023135 env[1196]: time="2024-02-08T23:24:13.023089737Z" level=info msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.053 [WARNING][5320] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k779c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f044ac1-9cb8-43bc-bcbe-22f291a59d64", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826", Pod:"csi-node-driver-k779c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali38834df9488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.053 [INFO][5320] k8s.go 578: Cleaning up netns ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.053 [INFO][5320] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" iface="eth0" netns="" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.053 [INFO][5320] k8s.go 585: Releasing IP address(es) ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.053 [INFO][5320] utils.go 188: Calico CNI releasing IP address ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.070 [INFO][5327] ipam_plugin.go 415: Releasing address using handleID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.071 [INFO][5327] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.071 [INFO][5327] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.078 [WARNING][5327] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.078 [INFO][5327] ipam_plugin.go 443: Releasing address using workloadID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.080 [INFO][5327] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:24:13.083230 env[1196]: 2024-02-08 23:24:13.081 [INFO][5320] k8s.go 591: Teardown processing complete. ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.083671 env[1196]: time="2024-02-08T23:24:13.083243356Z" level=info msg="TearDown network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" successfully" Feb 8 23:24:13.083671 env[1196]: time="2024-02-08T23:24:13.083281068Z" level=info msg="StopPodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" returns successfully" Feb 8 23:24:13.083844 env[1196]: time="2024-02-08T23:24:13.083802790Z" level=info msg="RemovePodSandbox for \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" Feb 8 23:24:13.083895 env[1196]: time="2024-02-08T23:24:13.083843628Z" level=info msg="Forcibly stopping sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\"" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.118 [WARNING][5350] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k779c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f044ac1-9cb8-43bc-bcbe-22f291a59d64", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742c9008179938bd8fb20ffb5cde6165532a2171223b5dc4c831b54e61955826", Pod:"csi-node-driver-k779c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali38834df9488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.118 [INFO][5350] k8s.go 578: Cleaning up netns ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.118 [INFO][5350] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" iface="eth0" netns="" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.118 [INFO][5350] k8s.go 585: Releasing IP address(es) ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.118 [INFO][5350] utils.go 188: Calico CNI releasing IP address ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.136 [INFO][5357] ipam_plugin.go 415: Releasing address using handleID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.136 [INFO][5357] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.136 [INFO][5357] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.142 [WARNING][5357] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.142 [INFO][5357] ipam_plugin.go 443: Releasing address using workloadID ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" HandleID="k8s-pod-network.85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Workload="localhost-k8s-csi--node--driver--k779c-eth0" Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.143 [INFO][5357] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:24:13.147072 env[1196]: 2024-02-08 23:24:13.145 [INFO][5350] k8s.go 591: Teardown processing complete. ContainerID="85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f" Feb 8 23:24:13.147072 env[1196]: time="2024-02-08T23:24:13.147050792Z" level=info msg="TearDown network for sandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" successfully" Feb 8 23:24:13.150556 env[1196]: time="2024-02-08T23:24:13.150510169Z" level=info msg="RemovePodSandbox \"85f61fa97f21959a12811647c2feb79e0e3976e1eb83f6065d196a366551b53f\" returns successfully" Feb 8 23:24:15.000908 systemd[1]: Started sshd@29-10.0.0.76:22-10.0.0.1:42100.service. Feb 8 23:24:14.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.76:22-10.0.0.1:42100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:24:15.001882 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:24:15.001970 kernel: audit: type=1130 audit(1707434654.999:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.76:22-10.0.0.1:42100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:24:15.037000 audit[5365]: USER_ACCT pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.039508 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 42100 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:24:15.041675 sshd[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:15.039000 audit[5365]: CRED_ACQ pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.044569 kernel: audit: type=1101 audit(1707434655.037:524): pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.044628 kernel: audit: type=1103 audit(1707434655.039:525): pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.044652 kernel: audit: type=1006 audit(1707434655.039:526): pid=5365 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Feb 8 23:24:15.046050 kernel: audit: type=1300 audit(1707434655.039:526): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0d3bc0e0 a2=3 a3=0 items=0 ppid=1 pid=5365 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:24:15.039000 audit[5365]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0d3bc0e0 a2=3 a3=0 items=0 ppid=1 pid=5365 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:24:15.045726 systemd-logind[1177]: New session 30 of user core. Feb 8 23:24:15.046503 systemd[1]: Started session-30.scope. Feb 8 23:24:15.048549 kernel: audit: type=1327 audit(1707434655.039:526): proctitle=737368643A20636F7265205B707269765D Feb 8 23:24:15.039000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:24:15.050000 audit[5365]: USER_START pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.052000 audit[5368]: CRED_ACQ pid=5368 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.058572 kernel: audit: type=1105 audit(1707434655.050:527): pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.058639 kernel: audit: type=1103 audit(1707434655.052:528): pid=5368 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.151392 sshd[5365]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:15.150000 audit[5365]: 
USER_END pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.153579 systemd[1]: sshd@29-10.0.0.76:22-10.0.0.1:42100.service: Deactivated successfully. Feb 8 23:24:15.154329 systemd[1]: session-30.scope: Deactivated successfully. Feb 8 23:24:15.150000 audit[5365]: CRED_DISP pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.156702 systemd-logind[1177]: Session 30 logged out. Waiting for processes to exit. Feb 8 23:24:15.157571 systemd-logind[1177]: Removed session 30. Feb 8 23:24:15.159171 kernel: audit: type=1106 audit(1707434655.150:529): pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.159245 kernel: audit: type=1104 audit(1707434655.150:530): pid=5365 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 8 23:24:15.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.76:22-10.0.0.1:42100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'